Wednesday 26 September 2012

What Is Image Reconstruction?




Image reconstruction is the creation of a two- or three-dimensional image from scattered or incomplete data, such as the radiation readings acquired during a medical imaging study. For some imaging techniques, it is necessary to apply a mathematical formula to generate a readable and usable image or to sharpen an image to make it useful. In computed tomography (CT) scanning, for example, image reconstruction can generate a three-dimensional image of the body from a series of X-ray projections taken at many angles around the patient.

Several issues pose a problem for image reconstruction. The first is noise: meaningless data that can degrade the clarity of an image. In medical imaging, noise can occur as a result of patient movement, interference, shadowing and ghosting. For example, one structure in the body might overshadow another and make it hard to spot. Filtering out noise is one aspect of image reconstruction.

Another issue is scattered or incomplete data. With a conventional X-ray, the image is captured in a single exposure as X-rays pass through the area of interest and strike the film or detector. In other techniques, a patient might be bombarded with radiation or subjected to a magnetic field, generating a substantial amount of raw data that needs to be assembled into a picture. The immediate output is not readable or meaningful to a human, and it must be passed through an algorithm to generate an image.


In image reconstruction, there are several approaches that can filter out noise without discarding meaningful data and process the raw data into something that makes sense. Iterative reconstruction is a popular technique. The algorithm starts by mapping out low-frequency data, creating a few data points that form the start of the image. Then it overlays slightly higher-frequency data, and higher still, and so on, until a complete image is available.
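To make the coarse-to-fine idea concrete, the sketch below (written in Python with the NumPy library, neither of which the article itself mentions) rebuilds an image from its frequency data by admitting a wider band of frequencies on each pass. It is a toy illustration of the refinement principle rather than a clinical CT algorithm, and the function name coarse_to_fine and the choice of eight passes are purely illustrative.

    import numpy as np

    def coarse_to_fine(measured_spectrum, n_passes=8):
        # Rebuild an image by admitting an ever-wider band of spatial
        # frequencies, mimicking the coarse-to-fine refinement described above.
        h, w = measured_spectrum.shape
        fy = np.fft.fftfreq(h)[:, None]
        fx = np.fft.fftfreq(w)[None, :]
        radius = np.sqrt(fx ** 2 + fy ** 2)        # frequency of each coefficient
        estimates = []
        for k in range(1, n_passes + 1):
            cutoff = radius.max() * k / n_passes   # admit a wider band each pass
            band = np.where(radius <= cutoff, measured_spectrum, 0)
            estimates.append(np.fft.ifft2(band).real)
        return estimates                           # the last entry is the full-detail image

    # usage (illustrative): spectrum = np.fft.fft2(raw_image)
    # frames = coarse_to_fine(spectrum)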

Creation of a flat image isn't the only thing that can be done with image reconstruction. A computer also can create a simulated three-dimensional rendition of the data by stacking a series of images together. It needs to be able to sort through the data to match the slices appropriately and must overlay them accurately to create images of internal structures. This can help a doctor evaluate a problem in multiple planes, instead of just at the flat angles offered by single images.
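The stacking step can be sketched in the same spirit. Assuming the slices are already aligned, hypothetical helpers like the ones below (again Python with NumPy) pile them into a volume and pull out planes in three orientations; real medical software must also register and interpolate the slices.

    import numpy as np

    def build_volume(slices):
        # Stack aligned 2D slices (all the same shape) into one 3D volume.
        return np.stack(slices, axis=0)            # shape: (depth, height, width)

    def orthogonal_views(volume, z, y, x):
        # Pull one axial, one coronal and one sagittal plane from the volume
        # so a problem can be inspected in several planes at once.
        return volume[z, :, :], volume[:, y, :], volume[:, :, x]

    # usage with random data standing in for registered scan slices
    slices = [np.random.rand(256, 256) for _ in range(64)]
    vol = build_volume(slices)
    axial, coronal, sagittal = orthogonal_views(vol, 32, 128, 128)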

Medicine is not the only field in which image reconstruction can be useful. It also can be valuable in archeology, in which researchers might want to investigate finds without damaging them. With image reconstruction, they can get images of mummies, sealed containers and other objects of interest to learn what is inside.

How Do I Become a Stereographer?




Still photography and motion-picture photography are the two branches of stereography, and a person follows fundamentally the same path to become a stereographer in either field. Essentially, a person needs a strong background in the style of photography that he or she wants to pursue as a stereographer. Although a few successful stereographers are self-taught, most have formal training in their field, including college courses, art school classes, and workshops. Stereography is a complicated process, however, and people generally do not master it through short workshops alone, but rather through years of experience. Apprenticeships and internships are hard to acquire because employers sign on only the best applicants, but these are some of the best ways to learn to become a stereographer.

Stereography is the art of using two almost identical photographs to create a three-dimensional (3D) image. The viewer uses special glasses or a stereoscope to see the 3D image. Stereography first became popular in the mid-1800s, and with modern technology it has expanded into the motion picture and television media. People often experiment with stereography using clay animation movies, and creating your own 3D films is a good way to show prospective employers your skill level.
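For readers who want to experiment, a minimal sketch of the basic stereography step, pairing two near-identical exposures side by side for a viewer, might look like the following. It assumes Python with the Pillow imaging library, and the file names are placeholders.

    from PIL import Image

    def side_by_side_pair(left_path, right_path, out_path):
        # Join two near-identical exposures into one stereo pair
        # that can be mounted in a viewer or stereoscope.
        left = Image.open(left_path)
        right = Image.open(right_path)
        pair = Image.new("RGB", (left.width + right.width,
                                 max(left.height, right.height)))
        pair.paste(left, (0, 0))
        pair.paste(right, (left.width, 0))
        pair.save(out_path)

    # side_by_side_pair("left.jpg", "right.jpg", "stereo_pair.jpg")  # placeholder names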


To become a stereographer, you should earn a college degree in still or motion-picture photography or be able to show prospective employers that you have several years of experience in the field. Reading a variety of job descriptions and employer requirements can help you tailor your schooling or self-guided learning. Stereographers are often more marketable if they have good computer skills.

Modern stereography uses specialized computer software and camera hardware, and most employers expect their employees, interns, or apprentices to be proficient with the necessary software. Other skills also help a stereographer's career, such as knowing cameras thoroughly enough to create custom cameras that meet a director's needs. To become a stereographer, you also need to realize that knowledge and skill alone do not necessarily ensure a successful career.

Typically, a good stereographer has the ability to think in 3D, which is a different perspective from normal 2D photography. Workshops or classes can introduce the prospective stereographer to the challenges of thinking in 3D. By practicing stereographic work, watching professionals at work, and experimenting with new techniques, a person can usually become proficient in this skill.

Employers judge applicants by viewing their work, so to become a stereographer it is essential to create a portfolio of both your professional and personal pieces. If the equipment is too expensive, enroll in classes or workshops where you can use the school's equipment. Many stereographers find their skills are more marketable if they can write software and build custom cameras, so it is advisable to highlight these talents alongside your visual effects artistry in your portfolio.

What Is Volumetric Display?




A volumetric display is a type of graphic display device that can create three-dimensional (3D) images. Images from volumetric displays are truly 3D, with width, height, and depth. This makes them more realistic than simulated 3D graphics, such as those shown on a standard flat display screen.

Unlike some traditional 3D graphics, volumetric displays do not require special goggles for a 3D image to be seen. The three-dimensional graphics created by this type of display can be seen from any angle, and multiple people can view the image on a volumetric monitor at once, each observing the picture from a different perspective. This provides viewers with a very natural viewing experience.

Several different methods can be used to create volumetric graphics. One type uses a technique known as a "swept surface." A swept-surface volumetric display exploits a visual trick called persistence of vision: the human eye perceives rapidly moving light as a single image, such as the arc of light that appears when a flashlight is waved quickly through the air. Many volumetric devices use fast-moving lit surfaces to create the illusion of a solid shape.
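As a rough sketch of how a swept-surface device might schedule its light, the toy Python/NumPy function below decides which voxels of a stored volume a spinning light-emitting blade should flash at each rotation step. It is only an illustration of the persistence-of-vision idea under simplified assumptions, not a driver for any real display, and the name sweep_schedule is hypothetical.

    import numpy as np

    def sweep_schedule(volume, n_angles=360):
        # For a display whose light-emitting blade spins about the vertical axis,
        # work out which voxel columns to flash at each of n_angles positions.
        # Flashed quickly enough, the wedges fuse into one apparent solid shape.
        depth, height, width = volume.shape        # volume[z, y, x] holds brightness
        cy, cx = (height - 1) / 2, (width - 1) / 2
        ys, xs = np.mgrid[0:height, 0:width]
        voxel_angle = (np.degrees(np.arctan2(ys - cy, xs - cx)) + 360) % 360
        schedule = []
        for k in range(n_angles):
            lo, hi = k * 360 / n_angles, (k + 1) * 360 / n_angles
            wedge = (voxel_angle >= lo) & (voxel_angle < hi)
            schedule.append(volume[:, wedge])      # the columns lit at this step
        return schedule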


A second method that is used to display volumetric 3D images is called static volume. In this technique, there are no moving parts in the visible area of the display. Instead, mirrors and lenses are used to direct a bright light such as a laser. Very fast pulses of laser light are aimed at different points in the air. Persistence of vision convinces the eye that these points of light are part of a single solid object.

Volumetric display devices have many different applications. These types of displays are useful for medical training and diagnosis. A 3D display, for instance, can show a realistic image of a skull or heart and allow a group of medical students to study the structure from every angle. Volumetric displays are also useful for architects and builders, who can visualize a construction project in three dimensions.

Research is ongoing into methods of interacting with volumetric displays. Sensors may allow users to manipulate and adjust the graphics without a keyboard; a camera connected to a display, for example, can track hand motions and rotate images as needed. These advanced methods of interaction allow for a very intuitive experience in which users can reach out and "touch" three-dimensional images.

What Is 3D Image Processing?




Three-dimensional (3D) image processing is the method by which a two-dimensional (2D) image becomes a 3D image, usually through model building and rendering. To create the image, 3D image processing starts with an object's mesh skeleton, which contains the lines and volume data needed to correctly represent the object in 3D space. After the model is built, it is rendered, and many different 2D views are captured to create the 3D effect. Entertainment and architecture workers use 3D image processing to build realistic models for movies and buildings, respectively. Doctors also use 3D image processing because it helps them visualize problems, whether they are diagnosing internal issues in a patient or conducting research.

To start 3D image processing, a mesh object is required. This can be created either in an image processing program, in which users draw lines to build up the mesh skeleton, or with a 3D scanner that captures the information. Regardless of the technique, the mesh skeleton contains the volume and depth information the computer needs to treat it as a 3D model. At this stage, the model does not have any color or texture; it is just a collection of lines that represent the model's shape and size.
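A minimal example of such a skeleton, here a cube written out in Python with NumPy purely for illustration, shows how little a mesh contains at this stage: just points in space and the faces that join them.

    import numpy as np

    # A minimal mesh "skeleton": vertices in 3D space plus faces that join them.
    # This unit cube stands in for the far larger meshes a scanner or modeling
    # program would produce; there is no color or texture yet, only shape and size.
    vertices = np.array([
        [0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],   # bottom square
        [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1],   # top square
    ], dtype=float)
    faces = [
        (0, 1, 2, 3), (4, 5, 6, 7),                   # bottom and top
        (0, 1, 5, 4), (1, 2, 6, 5),
        (2, 3, 7, 6), (3, 0, 4, 7),                   # four sides
    ]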


Rendering is the next stage in 3D image processing. Designers place colors and textures over the 3D model to make it look realistic, which makes it easier for people to see and understand the image. To create the 3D effect, the computer captures many different 2D views until every angle is covered, so the object appears three-dimensional when the user moves it.
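The capture of many 2D views can be sketched as a simple turntable loop. The function below is an illustrative stand-in for a real renderer, assuming the mesh vertices defined earlier: it rotates them around a vertical axis and drops the depth coordinate to get a flat view for each angle.

    import numpy as np

    def turntable_views(vertices, n_views=36):
        # Spin the mesh about the vertical axis and record a flat view at each
        # step, the way a renderer captures many 2D views of one 3D model.
        views = []
        for k in range(n_views):
            theta = 2 * np.pi * k / n_views
            rot = np.array([[np.cos(theta), 0, np.sin(theta)],
                            [0, 1, 0],
                            [-np.sin(theta), 0, np.cos(theta)]])
            turned = vertices @ rot.T
            views.append(turned[:, :2])            # drop depth for a 2D projection
        return views

    # views = turntable_views(vertices)            # e.g. the cube defined earlier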

The entertainment and architecture industries use 3D image processing extensively to build models. Both go through the same process of creating a model and rendering it; the difference is in how the model is used. In entertainment, the model is meant to move around and interact with actors. Architects use the model so clients can easily visualize the finished building and to make construction easier.

Medical science also makes use of 3D image processing, for both diagnosis and research. In diagnosis, an imaging device takes pictures of a patient's insides, and software builds a 3D model of an organ or region that doctors can examine. For research, doctors can watch and study models to see how they react over time; this also helps newcomers to the medical field visualize how internal structures look.

What Is PDF Rasterization?



The process of converting the codes contained in a portable document format (PDF) file into a two-dimensional (2D) image is known as PDF rasterization. The information stored in a PDF file gives a program or device instructions on how to display the document, but when the document is viewed on a screen, the results must be drawn in a 2D space. Depending on the type of objects used in a PDF document, the process of PDF rasterization can sometimes be accelerated with graphics hardware, much in the same way that three-dimensional (3D) graphics are calculated. There are a number of complex issues associated with PDF rasterization, especially if a document includes dynamic interactive elements or programming scripts that rely on external objects that are not easily converted into a static 2D image.


A PDF document is stored as a series of instructions and numbers that tell a program how to draw not only the text on a page but also any graphics that are required, whether they are compressed images or vector-based line art. PDF files store information this way so the document remains completely independent of the device used to render, display or print it, with no loss of quality. Even though some devices, such as printers with built-in PDF or PostScript interpreters and vector-based displays, can handle these instructions more directly, most practical systems need to convert the stored instructions into a 2D image so they can be used by hardware such as monitors and home printers.

PDF rasterization involves using mathematical formulas and other techniques to translate objects such as Bezier curves, lines and fonts onto a flat area, pixel by pixel. Because the PDF file stores how to draw the information rather than the finished pixels, a raster image processor (RIP) can render the document as large or small as desired without any loss of quality. One exception involves photographic-style images embedded or encoded in a PDF document; their number of pixels is already set, so they cannot be scaled without interpolation that could degrade the quality.
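As an illustration of what a RIP does with one such object, the hypothetical Python/NumPy function below samples a cubic Bezier curve, the kind of curve a PDF stores as four control points, and lights the corresponding pixels on a grid of any requested size. Because the control points are kept rather than the pixels, the same curve can be redrawn larger or smaller without losing quality.

    import numpy as np

    def rasterize_cubic_bezier(p0, p1, p2, p3, width, height, samples=1000):
        # Turn one cubic Bezier curve (stored in a PDF as four control points)
        # into lit pixels on a canvas of the requested size.
        canvas = np.zeros((height, width), dtype=np.uint8)
        t = np.linspace(0.0, 1.0, samples)[:, None]
        pts = ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
               + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)
        xs = np.clip(np.round(pts[:, 0]).astype(int), 0, width - 1)
        ys = np.clip(np.round(pts[:, 1]).astype(int), 0, height - 1)
        canvas[ys, xs] = 255
        return canvas

    # The same four control points can be drawn onto a canvas of any size,
    # so the curve scales up or down with no loss of quality:
    # img = rasterize_cubic_bezier(np.array([10, 10]), np.array([40, 90]),
    #                              np.array([160, 90]), np.array([190, 10]), 200, 100)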

Many computers perform PDF rasterization on a daily basis. A PDF reader, such as those built into web browsers, can quickly render PDF files so they can be read, although the speed of display is sometimes achieved by a reduction in quality as the program takes rendering shortcuts. Whenever a PDF document is printed, it also must be rasterized before being sent to the hardware. Mobile devices often have PDF rasterization built directly into their operating systems to allow accurate, hardware-accelerated rendering no matter what the size of the output field.
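In practice this work is usually delegated to an existing rasterizer. Assuming the free Ghostscript interpreter is installed and available as the gs command, a Python script along these lines would turn each page of a PDF into a bitmap; the function name, file names and resolution are only examples.

    import subprocess

    def rasterize_pdf(pdf_path, dpi=150):
        # Hand the PDF to Ghostscript (assumed to be installed as "gs") and
        # write one PNG bitmap per page at the requested resolution.
        subprocess.run([
            "gs", "-dNOPAUSE", "-dBATCH", "-dSAFER",
            "-sDEVICE=png16m",                 # 24-bit color PNG output
            f"-r{dpi}",                        # raster resolution in dots per inch
            "-sOutputFile=page-%03d.png",      # one numbered file per page
            pdf_path,
        ], check=True)

    # rasterize_pdf("example.pdf", dpi=300)    # a higher dpi gives larger, sharper bitmaps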

What Is 3D Photography?




Three-dimensional (3D) photography, or stereoscopic photography, is a method of taking photos that presents a separate image to each eye, mimicking the way the brain compiles imagery from the left and right eyes to interpret depth. There are multiple ways to capture the desired images, different ways to view them, and a myriad of software available to enhance, process, or display them. 3D photography can be accomplished with a single camera of any variety or with a dual-camera setup, and with little to no specialized training.

Action shots are virtually impossible with a single camera. Stationary 3D images can be created, however, by taking one exposure immediately after another with a single-lens camera. Using two lenses, whether with two cameras or a customized dual-lens camera, is preferred so that the two images can be captured as nearly simultaneously as possible. Two cameras, ideally held together in a chassis, allow for 3D photography of moving scenes. The effect can be enhanced by taking one of the pictures at a different distance or by using a downward angle.

There are a number of ways to view the images created by 3D photography. Sir David Brewster developed the first stereoscopic viewer in 1849 and displayed it at the 1851 Great Exhibition in London. Viewers and various methods of presenting stereoscopic images have evolved since then and have expanded to include digital projection and viewing on a computer screen.


The images can be presented side by side, overlapping, or alternating and are viewed with a binocular-style viewer, specialized glasses, or the naked eye. The two most common naked-eye viewing methods for 3D photography are cross-eyed and parallel viewing. Other naked-eye methods include lenticular and wobble, which is also referred to as wiggle. Binocular viewers display images in stereoscopic pairs and can be found in a variety of designs.

One of the most commonly known methods of viewing 3D photography is the use of anaglyphs. Anaglyph images are composed of two nearly identical images superimposed in contrasting color schemes, most commonly red and cyan. These images are viewed with glasses whose colored lenses filter out one image for each eye, creating the 3D impression. Polarization and digital projection are additional viewing methods that require specialized glasses. Viewing 3D imagery on a computer screen can be done with a number of the above-mentioned methods, including naked-eye viewing, anaglyphs, and polarization.
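Producing an anaglyph from a stereo pair is straightforward to sketch. The example below, which assumes Python with the Pillow and NumPy libraries and uses placeholder file names, builds a red-cyan anaglyph by taking the red channel from the left image and the green and blue channels from the right.

    import numpy as np
    from PIL import Image

    def make_anaglyph(left_path, right_path, out_path):
        # Take the red channel from the left image and the green and blue
        # channels from the right image, so red-cyan glasses send one view
        # to each eye.
        left = np.asarray(Image.open(left_path).convert("RGB"))
        right = np.asarray(Image.open(right_path).convert("RGB"))
        anaglyph = np.zeros_like(left)
        anaglyph[..., 0] = left[..., 0]
        anaglyph[..., 1:] = right[..., 1:]
        Image.fromarray(anaglyph).save(out_path)

    # make_anaglyph("left.jpg", "right.jpg", "anaglyph.jpg")  # placeholder names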

What Is 3D Imaging?




3D imaging is a process for rendering a three-dimensional image on a two-dimensional surface by creating the optical illusion of depth. Generally, 3D imaging uses two still or motion-picture camera lenses set a slight distance apart to photograph a three-dimensional object, effectively duplicating the stereoscopic vision of human eyes. The scene is reproduced as two flat images that viewers' eyes see separately, creating a visual illusion of depth as their brains combine the images into a single one.

The spot where the left and right images overlap is the point of convergence, which is generally the subject of the picture because it is the sharpest part of the image. Objects at the point of convergence appear to sit on the surface of the screen. As objects move farther from the point of convergence, they appear either closer to or farther from the viewer, creating the illusion of depth.
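The geometry behind this can be written down in a few lines. In the simplified Python sketch below, which assumes a viewer converged on a screen at a known distance and a typical 6.5 cm eye separation (all illustrative numbers), points at the convergence distance have zero on-screen offset, nearer points have a negative offset and appear in front of the screen, and farther points have a positive offset and appear behind it.

    def screen_parallax(depth_m, convergence_m, eye_separation_m=0.065):
        # Signed offset between the left and right views of a point, measured
        # on the screen plane. Zero at the convergence distance (the point sits
        # on the screen), negative for nearer points (they appear in front),
        # positive for farther points (they appear behind).
        return eye_separation_m * (1.0 - convergence_m / depth_m)

    # screen_parallax(2.0, convergence_m=2.0)   ->  0.0    (on the screen)
    # screen_parallax(1.0, convergence_m=2.0)   -> -0.065  (in front of the screen)
    # screen_parallax(8.0, convergence_m=2.0)   ->  0.049  (behind the screen)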

3D imaging is produced either as two separate images viewed side by side or as a single image with two overlapping elements. In stereoscopy, two static photographs are placed side by side, and the viewer looks at the left and right images with each eye separately. Stereo photography dates back to the early development of photography. It is a simpler form of 3D imagery, requiring only two still cameras to produce two static images, and the images can even be viewed by each eye independently without the aid of optical equipment.


A stereoscope is a device that holds the stereoscopic images on a single card or presents them at the appropriate distance for the viewer to see them in three dimensions. To see the image in three dimensions without a stereoscope, the viewer can look at both side-by-side images and cross his or her eyes until the images merge. When they overlap, three images appear, and the middle one appears in three dimensions.

Single 3D images, such as those used in 3D movies, are projected on a screen and are usually viewed with specialized optical equipment, such as 3D glasses or polarized lenses, that splits the two images between the eyes. To the naked eye, these images look like a double exposure. Early 3D movies used red and cyan filters; the 3D glasses contained red and cyan lenses, and each lens removed the image produced by the opposite filter, creating a separate image for each eye.

Modern 3D imaging instead splits the images with the aid of polarized lenses. The process is essentially the same but does not distort the colors of the image the way red and cyan filters do. Software programs also create 3D imagery with various techniques, such as moving objects that are closer to the viewer more than those farther away, a motion-parallax effect that reinforces the illusion of depth.