Pixel Study, continued investigations
Rewriting the code in Processing allows for a more streamlined conversion of the data set. Computing power limits me to converting images of roughly 1 MP at a time.
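The core of the conversion can be sketched in a few lines. The original is written in Processing; this is a minimal Python stand-in (the function name and the tiny 3×3 "image" are mine, for illustration) showing the idea of laying out each pixel's 0–255 intensity as a printed number:

```python
# Hypothetical sketch of the conversion described above: each pixel's
# light intensity (0-255) is written out as the number itself,
# arranged in the same grid positions the pixels occupy.

def intensities_to_grid(pixels, width):
    """Format a flat list of 0-255 intensities as rows of numbers."""
    rows = [pixels[i:i + width] for i in range(0, len(pixels), width)]
    return "\n".join(" ".join(f"{v:3d}" for v in row) for row in rows)

image = [0, 128, 255,
         64, 192, 32,
         255, 0, 100]
print(intensities_to_grid(image, 3))
```

At 1 MP, the same loop runs a million times, which is where the computing-power ceiling mentioned above comes in.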
* * *
C O N C E P T
Represented here is the same image at the same scale, but expressed differently. Light intensity values are either presented as a gradient from black to white or as a number from 0 to 255.
Technically speaking, a photograph is nothing more than an array of numbers. This is the natural form in which a computer stores, interprets, and acts upon a data set. Yet to high-order probabilistic processors such as ourselves, the information must be transformed into a different state before we can interpret it holistically. To us, the raw information is meaningless.
To give a sense of how vast a data set is, consider this: to legibly read all the information in a greyscale photograph at the same resolution as the second image presented here, a 10 MP photo must be enlarged to over 10 by 15 feet (3 by 5 m). Each square inch would contain 400 numbers (or 60 numbers per square cm), giving a total of ten million numbers. The sheer amount of information is astonishing, yet what is most fascinating is that when we see an image, we see what’s represented: a flower, a person, a house. What’s invisible to us are the individual pieces of data. Consider the lengths we go to in compressing such a vast amount of information into such a small space, until we no longer see the “photo-graph” (a graph of light); we see through it.
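The enlargement figures above can be checked with simple arithmetic (assuming the 400 numbers per square inch form a 20 × 20 grid):

```python
# Back-of-the-envelope check of the enlargement claim.
numbers_per_sq_inch = 400        # a 20 x 20 grid of printed values
width_in = 10 * 12               # 10 feet in inches
height_in = 15 * 12              # 15 feet in inches

total = numbers_per_sq_inch * width_in * height_in
print(total)  # 8,640,000 -- on the order of a 10 MP image

per_sq_cm = numbers_per_sq_inch / 6.4516  # 1 sq in = 6.4516 sq cm
print(round(per_sq_cm))  # ~62, matching the "60 numbers per square cm"
```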
Revealed is a glimpse into the vastness and complexity of the unseen computations and processing power of some of the tiniest machines. We may not be able to sift through vast stores of data like those represented in an image with the ease that a computer can. But when a computer loads an image, it doesn’t notice the benches and trees from our childhood park or recognize the faces smiling back.
C H R O M A + P R O X I M I T Y
To what extent can we learn color as a language?
If all visual cues are replaced with a single hue along a spectrum from red to violet, where warmer colors are mapped to closer objects and cooler colors to objects far away, and a subject is placed in a dark room and asked to navigate through his environment, how would he adapt? What would it mean to see?
[ ABS plastic, Arduino, sonar sensor, LEDs, polystyrene ]
D E P T H . I L L U S I O N
How do you manipulate perception to craft an experience of illusory impossibilities?
A prototype for a large-scale installation that affords the viewer the ability to engage, interact, and discover the principles underlying the visual phenomena observed. A thorough explanation of my research and design process culminating in this model of the illusion can be found here.
[ paper, plastic, wire, masonite, LEDs, ABS plastic ]
O R G A N I C . C O D E
How do complex forms emerge from a system governed by simple rules?
Using Arduino and a charlieplexed LED matrix, I explored methods of producing the illusion of life within a box—life which changes, reacts, and seems to have a mind of its own. The above piece is a study for a larger, more engaging experience investigating viewer behavior in response to unfamiliar animate forms.
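The charlieplexing mentioned above is what makes a dense matrix possible with few Arduino pins: each ordered pair of tri-state pins (one driven high, one low, the rest floating) addresses one LED, so n pins drive n·(n−1) LEDs. A small sketch of the counting argument (the pin numbers are arbitrary examples, not the ones used in the piece):

```python
from itertools import permutations

def charlieplex_leds(pins):
    """Each ordered pair (high_pin, low_pin) addresses exactly one LED,
    so n tri-state pins can drive n * (n - 1) individual LEDs."""
    return list(permutations(pins, 2))

pairs = charlieplex_leds([2, 3, 4, 5])  # four hypothetical Arduino pins
print(len(pairs))  # 12 individually addressable LEDs from 4 pins
```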
[ wood, LEDs, Arduino, wire, acrylic, window tint ]
T W I S T
Poplar and plywood. Orbital movement gives the impression of a table twisting or folding in space.
—Collaboration with Josh Ezickson—
B E N C H . 1 4
One sheet of plywood. Fourteen pieces. No screws, no nails, no glue. Flat-packs into a 1.5’ by 8’ stack a little over 2 inches thick.
—Collaboration with Evan Finkle—
Depth Illusion. Mathematical Theory REVISED.
The fact that the previous mathematical model did not agree with data collected on the observed perceived distance of the shadow plane pushed me to reinvestigate the geometry at play. I realized that the prior model was incorrect and corrected its mistakes. The two pages of notes above show the derivation of the current model, which is much simpler than the previous theory. The resulting effects of the three primary variables on the perceived distance of the shadow plane not only agree better with observations of the illusion, but also make logical sense (working through the geometrical logic is a bit tedious and will be skipped here).
A working model will allow me to precisely manipulate the distance at which the illusion is perceived, thus providing me the tools to create more effective holograms. The three large 3D graphs show the effect of backdrop depth, slit and strip width, and distance to screen on the perceived distance of the shadow plane, respectively. The axis perpendicular to the grey plane (included for clarity) is perceived distance from 0 to 50 cm. The other axes, though not explicitly labelled on the graphs, are all in cm and are scaled the same with respect to each other, to allow for easier comparison. Each sheet of color represents a different value of the variable being assessed. The two axes parallel to the grey reference plane are the other two of the three primary variables. For example, in the second graph, the red sheet is the perceived distance of the shadow plane for a slit and strip width of .25cm as a function of distance to screen (x-axis) in cm and backdrop depth (y-axis) in cm.
From these graphs, we can conclude that backdrop depth has a very minimal effect on the perceived distance of the shadow plane, so there is little use in changing backdrop depth to create a convincing hologram. Distance to screen and slit and strip width, on the other hand, have a great effect on perceived distance. However, distance to screen changes constantly as the viewer interacts with the illusion; because it cannot be held fixed, it is not a reliable variable to manipulate when creating holograms. Thus, the most useful and reliable variable for controlling the perceived distance of the shadow plane is the slit and strip width.
Another important note is that from observation of the illusion, the shadow plane cannot be perceived beyond approximately 20 cm (this is a very round estimate). If we look at the second graph (of changing slit and strip width) and say that 1 cm (the green sheet) is indeed the most useful and reliable width (assuming prior experiments can be roughly generalized to the populace), then a large scale illusion, regardless of backdrop depth, would result in robust perceived distances around 50 cm to 10 cm in front of the screen (if 20 cm is roughly the maximum robust shadow plane). Do the intersections of these sheets with the grey plane (at zero perceived distance) indicate where the illusion yields?
Depth Illusion: Experiments to determine Intervals of Illusion Constancy.
Rigorous testing of myself under appropriate conditions (complete darkness) yielded a large data set that reveals the illusion’s interval of robustness for my own visual system. For various backdrop depths (ranging from .5 to 8 cm) and slit and strip widths (from .25 to 2 cm), the distance from my eyes to the screen was measured for two phenomena: (1) when the illusion formed and the black vertical shadows falsely fused upon approaching the illusion while wearing the LED headpiece, and (2) when the illusion yielded and the black vertical shadows separated upon approach, breaking up the illusion. The process proceeded as follows: approach the screen from far away until the illusion forms (measure the distance), then keep moving closer until the illusion yields and the strips of paper are once again discernible (measure again). I should note that the LED headpiece uses one IL 185 5 mm 3 V diffused blue LED driven at 6 V.
The testing apparatus and data table from this experiment are the first two photos above. It is important to note that sometimes, the illusion formed as soon as the screen was visible. In this case, the measurement is denoted in the table with “FOV” (Forms Once Visible). In other cases, the illusion did not yield no matter how close I got to the screen. In these instances, the measurement is denoted with “DNY” (Does Not Yield).
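Tabulating measurements like these requires handling the two censored cases as sentinels rather than numbers. A minimal sketch of that bookkeeping (the function name is mine, not from the data table):

```python
def parse_measurement(entry):
    """Convert a data-table cell to a distance in cm, or a labeled sentinel.

    "FOV" = Forms Once Visible (illusion present as soon as the screen is),
    "DNY" = Does Not Yield (illusion never breaks, however close I get).
    """
    entry = entry.strip()
    if entry in ("FOV", "DNY"):
        return entry
    return float(entry)

print(parse_measurement("37.5"))  # 37.5
print(parse_measurement("DNY"))   # DNY
```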
Two sets of visualizations were made to understand the information. The first is the set of two photos on a medium grey background, which represent a three-dimensional model of the space of illusion constancy. The difficulty of interpreting so much information at once led me to break the data apart: once for the boundary of formation and again for the boundary of dissolution. These visualizations are the two graphs on white backgrounds above and are discussed below.
The graph titled “Distance at Which Illusion Forms for Various Backdrop Depths and Screen Widths” shows the boundaries at which the illusion forms (the line between the “NO ILLUSION” and “ILLUSION” spaces) for each slit and strip width tested on myself. Yellow dots mark trials in which the illusion formed as soon as the screen became visible. Upon observing others interact with the wrap-around panoramic prototype, I realized that the illusion must be robust at a distance at which viewers can interact with it (when reaching out and trying to touch the nonexistent shadow plane). The goal of this experiment was to determine which set of variables (backdrop depth and slit and strip width) would work best for a large-scale installation piece. The grey dashed lines indicate the interval in which the illusion should be constant for the majority of participants to be able to interact with it (1.5 to 2.5 feet accounts for the distance of an arm reaching out). An ideal set of variables would let the viewer experience no illusion when approaching from far away, an illusion at medium to medium-short distances, and no illusion at close to very close distances. The idea is that the participant should be able to experience both the formation and the dissolution of the illusion, because an illusion is no illusion if one cannot see past the smoke and mirrors. The novelty and mystique arise from this sort of double existence, somewhere between the concrete being and the ephemeral intangible.
From the Boundaries of Illusion Formation, we can conclude that a slit and strip width of .25 cm is inappropriate and the most adaptable widths are around .75 to 2 cm.
The graph titled “Distance at Which Illusion Yields for Various Backdrop Depths and Screen Widths” shows the boundaries at which the illusion yields for myself. The goal of this experiment was to determine an appropriate set of variables for which individuals could interact with a robust illusion at 1.5 to 2.5 feet (arm’s reach), but witness dissolution at a distance of approximately 6 to 12 inches or less. The idea behind this is that a good illusion should reveal itself in the presence of deliberate scrutiny (moving up close to it).
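The selection criterion described across these two experiments can be phrased as a pair of interval checks. This is my own restatement, not code from the project; the threshold values are the arm's-reach and scrutiny distances quoted above, converted to cm:

```python
def width_is_acceptable(forms_at_cm, yields_at_cm,
                        reach_far_cm=76.2, scrutiny_max_cm=30.5):
    """A slit/strip width is acceptable if the illusion is robust
    throughout arm's reach (~1.5-2.5 ft, so it must form at or beyond
    ~76 cm) but dissolves under close scrutiny (~1 ft, ~30 cm, or less).

    forms_at_cm:  distance at which the illusion forms on approach.
    yields_at_cm: closer distance at which it breaks back up.
    """
    robust_over_reach = forms_at_cm >= reach_far_cm
    dissolves_up_close = 0 < yields_at_cm <= scrutiny_max_cm
    return robust_over_reach and dissolves_up_close

print(width_is_acceptable(200, 20))  # True: robust at reach, yields close
print(width_is_acceptable(60, 20))   # False: not yet formed at arm's reach
```

The "FOV" and "DNY" cases from the data table would map to an effectively infinite formation distance and a zero yield distance, respectively; the latter fails the close-scrutiny check, which is why the never-yielding widths are ruled out.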
From the Boundaries of Illusion Dissolution, we can conclude that at large backdrop depths, 1.5 and 2 cm slit and strip widths are inappropriate choices. Furthermore, widths of .5 and .75 cm maintain a constant illusion even at high backdrop depths, which is not ideal.
Overlaying the results of the two graphs leaves acceptable widths around 1 cm, which is the precise measurement I used at the outset of my exploration into the depth illusion.
One unanticipated problem with this experiment is that different people may accommodate at different distances than I do. In fact, testing others revealed that the interval of constancy seems to grow with age, so this data cannot be applied directly to others. However, if I assume that I am on the lower end of constancy intervals, then the intervals recorded here should overlap with most others’, especially those of older individuals. If I want the illusion to yield at a particular distance, I must test more people to find a set of variables that allows interaction with a robust illusion but dissolution upon closer inspection. Another possible explanation for my comparatively small intervals is that I understand how the illusion works and have become so accustomed to it that it naturally yields more easily.
Chroma+Proximity Mapping: First iteration of immersive wearable sculpture experiment replacing depth and layout cues of the physical environment with a spectrum of color filling the visual field. Arduino, 3D printed skeleton, wires, sonar sensor, RGB LEDs.
To what extent and how effectively can we learn the language of color to navigate through an environment if stripped of all depth cues (binocular disparity, accommodation, etc.)? Will a chromatic spectrum supplement tactile information? With only one sensor, how do individuals adapt to obstacles at multiple levels of height (i.e. scanning of surroundings)? These are a few of the questions I hope to explore with a future iteration of the design.
The sensor is mounted in front of the eyes and points directly forward, situated between two smart RGB LEDs. The LEDs face stretched white 3D-printed sheets, .4 mm thick, that cover the visual field of each eye and act to disperse and diffuse the colored light. The Arduino and power pack are attached to the mount at the back of the head.
When using the headset, the room lights are turned off and the individual wearing the piece is asked to navigate through an environment, receiving the same color cues in each eye. The sonar sensor measures the distance of the closest object in a two-foot-wide path directly in front of it. The sensor has a range of 0 to 6.45 m and refreshes 10 times every second. A very close object maps to a red light, a very far object to a blue/violet light, and all intermediate objects to the appropriate hue in the spectrum between red and violet. Objects farther than 6.45 m deliver the violet light.
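The distance-to-hue mapping can be sketched as a simple linear scaling over the sensor's range. This is an illustrative reconstruction, not the Arduino code itself; in particular, placing violet at a hue angle of 270° is my assumption from the standard HSV color wheel:

```python
def distance_to_hue(distance_m, max_range_m=6.45):
    """Map a sonar reading to an HSV hue angle: 0 degrees (red) for a
    touching object, up to ~270 degrees (violet) at or beyond the
    sensor's 6.45 m maximum range. Out-of-range readings are clamped."""
    clamped = max(0.0, min(distance_m, max_range_m))
    return 270.0 * clamped / max_range_m

print(distance_to_hue(0.0))   # 0.0   -> red, object at the face
print(distance_to_hue(6.45))  # 270.0 -> violet, at maximum range
print(distance_to_hue(10.0))  # 270.0 -> still violet beyond range
```

On the actual headset this hue would be converted to RGB and pushed to the two smart LEDs on each refresh, ten times per second.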
Improvements to this design include a more user-friendly, ergonomic frame that is intuitive to wear and will not break with repeated use; better containment of the wires, with shielded wire to minimize noise in the sensor readings; and a single diffuser across both eyes made of a smoother plastic with fewer irregularities, removing all points of fixation and fitting a greater variety of face shapes.
Stretching and Pressure Forming 3D Printed PLA Sheets.
Exploring the properties of 3D printed plastics by shaping them with a heat gun led me to discover a way to form very thin opaque domes for a concurrent project that required a way to disperse light over a field of vision. The process of creating the domes is essentially the same as vacuum forming, except the sheet is pushed over a surface instead of being pulled and wrapped around it.
Three studies follow exploring the surface of constant negative curvature formed by various objects pulling the soft plastic around a leading face. Revealed is the cross-hatching pattern of the plastic strands, a function of the process of forming a strong plastic sheet from fused deposition modeling.