Introduction: Make a Volumetric Projector From Junk
A Volumetric Projector is ... what R2-D2 used to show Princess Leia in Star Wars.
This is an old project that stalled when DLP refused to provide a DMD chip
that would have vastly increased the 3D resolution. The project has not changed at all since
before HDTV even existed. I suppose that after people realize the world isn't flat, this is what
they will watch when 1800s movie technology finally gets "old".
It is simple enough to have been built entirely by no more than 2 people, in a very short
total amount of time.
It actually does project animated 3D bitmapped images into the air.
It can be made by anyone who is good at PIC programming and mechanically inclined.
It cost us NOTHING, and has potential beyond most high budget 3D display research.
It contains no parts that were not available in 1980. All were scavenged from a junk pile.
Most or all of its 3D animations were sent 100 miles over a 14400 baud modem.
It does not stop if the modem hangs up.
It looks the same from all angles.
It has no mirrors, just one lens, which doesn't have the size limitation of a parabolic mirror.
It's a hunk of junk, but it works.
IT IS FREELY LICENSED (CC Share-Alike)
If this info is updated, it may include a workaround for the failure to acquire DLP,
as well as some more of the animations that are stored in it.
On the right is an old 80286 laptop, used as a TTY.
Step 1: Go to This Website
All of the currently available notes for this project are here and may be updated in the future.
http://holodeck.virand.com
Electronic Parts used:
9-pin serial (DB9) connector
MAX232C serial data voltage level converter
Intel 8031 processor (with 11.092 MHz clock crystal)
4K EPROM
256 LEDs, latched by...
32 of 74HC574, selected by...
2 of 74LS154
32K bytes of RAM, used as 60 3D frame buffers
Optical sensor, for vertically syncing the lens piston via interrupt to the processor
5-volt power supply... when all LEDs are on, they draw about 7 amps.
Unnecessarily ridiculous and heavy piston mechanism pushing an eyeglass lens up and down.
Protocol: simple ASCII RS-232C at up to 19200 baud... just a couple of control commands.
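As a hedged sketch of how those parts could fan out (the exact address wiring here is an assumption, not taken from the schematic): each 74HC574 octal latch holds 8 LEDs, and the two 74154 4-to-16 decoders turn a 5-bit latch address into one of 32 clock strobes:

```python
# Hypothetical model of the LED fan-out: 256 LEDs / 8 per 74HC574
# latch = 32 latches, selected by two 74LS154 4-to-16 decoders.
NUM_LATCHES = 32
latches = [0x00] * NUM_LATCHES          # one byte per 74HC574

def write_led_byte(address, data):
    """Model one bus write: the decoders strobe exactly one latch."""
    decoder = (address >> 4) & 0x01     # which 74LS154 is enabled
    select = address & 0x0F             # which of its 16 outputs fires
    latches[decoder * 16 + select] = data & 0xFF

def led_state(led_index):
    """Is LED 0..255 lit? Each latch output bit drives one LED."""
    return bool(latches[led_index // 8] & (1 << (led_index % 8)))

write_led_byte(5, 0b00000001)           # light the first LED of latch 5
```

Refreshing a whole 256-LED level is then just 32 sequential byte writes, which is why every LED can stay lit continuously between updates.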
A DLP-less solution will use an ancient hi-res mechanical television projection mechanism instead of LEDs.
The mechanism is called a "Mirror Screw" and works differently than the helical mirror in an old 3D project on the site,
but is a cheap and low-tech alternative to DLP chips for this project.
Step 2: Start Over-Get an Eyeglass Lens
Get the biggest eyeglass lens you can find, maybe one that has not yet been cut to fit in frames.
An assortment might be better. Uncut stock lenses look like magnifying glasses except they have
a concave side. Or just play with someone's glasses for a minute.
Get an LED and a watch battery and light it up. Hold the "glasses" facing down.
Hold the LED above the glasses. Move the glasses up and down. You should see
a movable virtual image between the glasses and the LED.
If you have a circle of LEDs and you move the glass lens up and down you get a tube.
If you have a square of LEDs you get a cube.
If you change the image while the lens moves, using a PIC with many LEDs for example, you can get any shape.
What's important is that the virtual image appears in the air above the lens,
and the size of the lens is the maximum size of the image.
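A hedged thin-lens sketch of that experiment (the focal length is made up, and the sign convention assumes a diverging lens, which is consistent with the image appearing between the lens and the LED):

```python
# Thin-lens model of the floating image. Distances are in mm; a
# negative image distance means a virtual image on the LED's side
# of the lens. The -150 mm focal length is a hypothetical value.
def image_distance(f_mm, object_mm):
    """Solve the thin-lens equation 1/si = 1/f - 1/so for si."""
    return 1.0 / (1.0 / f_mm - 1.0 / object_mm)

f = -150.0                  # diverging eyeglass lens (assumed)
led_height = 100.0          # LED fixed 100 mm above the rest position
for lens_height in (0.0, 10.0, 20.0):
    so = led_height - lens_height       # LED-to-lens distance
    si = image_distance(f, so)          # negative: virtual image
    image_height = lens_height - si     # image floats above the lens
    print(f"lens at {lens_height:4.1f} mm -> image at {image_height:5.1f} mm")
```

Moving the lens moves the image, so an oscillating lens plus LEDs updated in sync sweeps out the display volume, which is exactly the hand-held experiment above.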
---
EDIT: new image and comment:
Oh... It turns out that the shininess of this woofer is causing unexpected multiple floating images.
Step 3: PROGRAMMING
I realize that I skipped the part about building the machinery.
It's junk, so why would you build it the same way?
This is a short description of the program in English pseudocode:
1. RESET:
Is there any usable data in the RAM?
If not, copy the demo animation from the ROM into the RAM.
2. Read the animation frame list and display the next frame.
(Copy the RAM into the LEDs.)
Exceptions: frame 00 means go to the last frame; FF means go to the first frame.
3. Wait for sync, then go to step 2.
SYNC interrupt: as above, go to step 2.
SERIAL INPUT interrupt:
Just store the data in a buffer and continue as before, unless it's a RETURN; then obey it.
Data format: 0 through 9 and A through F are hex digits, usually to be stored in the RAM.
Lowercase letters are commands...
r - cold restart... copy the demo from ROM into RAM (testing)
a - followed by hex data from 01 to 3F representing the frame animation sequence, plus 00 to retain the last image and FF to loop
d - followed by one hex byte, the frame to be displayed
f - followed by one hex byte, the frame to write data to
i - identify the active device on the RS-232C port; responds with "Q", which arbitrarily means "CUBE" (testing)
HEX DATA - usually represents a new frame of a 3D bitmap, conveniently ending each line with a RETURN,
because each line contains one 2D level of the 3D bitmap frame. Some commands select frames by the following hex byte.
Many animations, especially rotating symmetrical objects, can animate in as few as 3 frames, and after the frames are
uploaded, the command "a 01 02 03 00" starts the animation.
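Those commands are easy to drive from any host. Here is a hedged sketch of the byte stream; the exact spacing and the use of RETURN as the terminator are assumptions based on the description above:

```python
# Hypothetical helpers for the ASCII command protocol described above.
def select_frame(frame):
    """'f' picks the frame that the following hex data is written to."""
    return "f %02X\r" % frame

def animate(*frames):
    """'a' sends a frame sequence, ending with 00 (hold) or FF (loop)."""
    return "a " + " ".join("%02X" % f for f in frames) + "\r"

# Upload frames 1..3 beforehand, then play them and hold the last image:
stream = select_frame(0x01) + animate(0x01, 0x02, 0x03, 0x00)
```

At 19200 baud a command like this takes only milliseconds, which is why the projector can download one animation while playing another.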
The resolution of the Volumetric projector is currently 16x16x16=4096 bits= half a kilobyte,
so about 62 frames of 3D image animation fit in 32K.
Frame zero is divided into the animation sequence storage and the serial data buffer, and a command to
display frame zero will be interpreted as "Pause animation, show the current image until further notice."
It's all really as simple as steps 1, 2, 3; the software in the ROM is less than 1K, and the remaining ROM space contains
a demo image so that the thing should always work, even without being connected to a computer.
A long "3D TV show" could be streamed into it, since it can download one thing and play another at the same time.
All of the animations for this volumetric projector were quickly generated using a program written in BASIC in less than an hour.
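The original BASIC generator isn't reproduced here, but a hedged Python stand-in shows the idea: build one 16x16x16 frame (a hollow tube, like the circle-of-LEDs example in step 2) and emit it as 16 RETURN-terminated hex lines, one 2D level of 64 hex digits per line:

```python
import math

def tube_level():
    """One 16x16 slice: a ring of lit voxels around the center."""
    bits = [[0] * 16 for _ in range(16)]
    for step in range(64):
        angle = 2 * math.pi * step / 64
        x = int(round(7.5 + 6 * math.cos(angle)))
        y = int(round(7.5 + 6 * math.sin(angle)))
        bits[y][x] = 1
    return bits

def level_to_hex(bits):
    """Pack a 16x16 slice into 64 hex digits, row-major, MSB first."""
    out = []
    for row in bits:
        value = 0
        for bit in row:
            value = (value << 1) | bit
        out.append("%04X" % value)
    return "".join(out)

# 16 identical levels make a vertical tube: 16 levels x 256 bits
# = 4096 bits, the half-kilobyte frame size computed above.
frame = [level_to_hex(tube_level()) + "\r" for _ in range(16)]
```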
Any questions?
Step 4: How the FIRST One Worked...
This one was made a long time ago with an MC68705P3S, which is vaguely like a PIC16C57, having about 1K.
Using dot matrix displays like these for 3D is not very impressive;
at least at the time, LEDs were not as bright, so the image was very dim and could only be seen in dark rooms.
The chip was programmed with clever patterns that made layers of a rotating cube,
with 3 image phases (the cube rotated by cycling through 3 images).
The patterns were selected so that the dot matrix display would not be scanned,
but remain on as the rotor passed through the cube image.
All of the Cube volumetric projectors use unscanned LEDs to give maximum brightness.
Rotor? This was simply a chip and a battery and a dot matrix display on a computer fan.
The 3 bitmapped images each consisted of several layers of carefully designed 7x10 bitmaps (each needing only 17 drive bits, not 70 independent bits).
Certainly they can fit in an old PIC chip.
There was an animation sequence. The cube image rotated clockwise,
then it rotated counterclockwise, then it stopped. The sequencing of the 3 images
in the animation was something like...
1231231231231231231231231 (turn one way)
3213213213213213213213213 (turn the other way)
3333333333333333333333333 (stop turning, then repeat this whole sequence)
(There is a "nut in a cube" animation on the website WMV video that plays similarly, also having only 3 frames.)
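As a hedged sketch, the three-phase sequence above could be generated like this (the frame numbers and repeat count are illustrative):

```python
# Build the first machine's three-phase sequence: cycle forward,
# cycle backward, then hold the last frame, repeating forever.
def build_sequence(repeats=25):
    forward = ([1, 2, 3] * repeats)[:repeats]    # 123123...
    backward = ([3, 2, 1] * repeats)[:repeats]   # 321321...
    hold = [3] * repeats                          # 333... (stopped)
    return forward + backward + hold

sequence = build_sequence()
```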
As the fan turned, the circuit was cued by a rare earth magnet passing a tape head, to dump a frame into the LEDs.
This device is very easy to make but not very impressive, and if you try to touch the image, it will hurt.
Step 5: How the First One's Image Was Generated.
The diagram shown is a rough plan of how "the first one" generated the three images
that made the image of a rotating cube. The large pattern on the top represents the
appearance of the top of the virtual image, which was not actually coded.
Below each image is the series of dot matrix LED patterns quickly displayed
in order, as the rotor turned, so that a cube appeared in one of the three frames
animating its rotation. Each of the small patterns represents a slice of the
3D frame, as the LEDs light while the rotor passes through the image.
These were very carefully constructed because this particular device was limited
to displaying patterns that could appear on the LED display without multiplexing.
Each pattern is one that could result from power being applied continuously to
the LED display. That was necessary because the display was not very bright,
and would have been a lot dimmer with multiplexing.
Since a square is not among the patterns that could be displayed this way,
the most obvious way to display a 3D cube was not among the three frames.
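That limitation can be stated precisely: with every driven row and column line held on continuously, the lit LEDs are exactly the intersections of the driven rows with the driven columns. A hedged check (the small bitmaps here are illustrative, not from the diagram):

```python
# A pattern is displayable without multiplexing only if it equals the
# "outer product" of its own lit rows and lit columns.
def is_static_pattern(pattern):
    """Can this bitmap appear with rows/columns powered continuously?"""
    rows = {i for i, row in enumerate(pattern) if any(row)}
    cols = {j for j in range(len(pattern[0]))
            if any(row[j] for row in pattern)}
    return all(bool(pattern[i][j]) == (i in rows and j in cols)
               for i in range(len(pattern))
               for j in range(len(pattern[0])))

solid = [[1, 1, 1],
         [1, 1, 1],
         [1, 1, 1]]                       # filled square: displayable
outline = [[1, 1, 1],
           [1, 0, 1],
           [1, 1, 1]]                     # hollow square: impossible
```

This is why the most obvious square-outline slices could not be among the three frames.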
So "the first one" (the 3D device built on a fan) was very limited and primitive and no more useful than for
showing that a 3D rotating cube could be displayed and people would say "wow".
Step 6: Dusting It Off for Another Good Demo and More Info
Ok, my PCs are a little less messed up recently and I can do more imaging.
Here is the main board on the big junky machine.
It's very glitchy after being pulled out of storage, probably from lots of loose wires;
when this thing runs, it shakes itself violently because of those big nasty motors.
Somewhere in it is our awesome demo 3D animation of an airplane flying over mountains,
which must be sort of an archetype because Perspecta (tm) did a very similar demo
on the news soon after I did, and before we and they knew about each other.
I offered them the projection tech then, but without even seeing our junk-o-matic,
they ignored us and just sent spam.
We and our friends just smiled and said, "that's how you do that!",
just like our sphere display (maybe a future instructable) obviously
needed to show an image of the globe of the earth. The flying demo,
which I eagerly hope to show in a video here soon, is distinctly different from
all the other spinning shapes and stuff. There is a 2D render of the "flying demo"
image somewhere on the "holodeck" site linked in step 1.
Via the ribbon cables, each of the 256 LEDs above the magic eyeglass lens is
connected to its own bit of RAM, and in this way it is similar to a DLP/DMD,
in the sense that that tech has a bit of RAM controlling every single mirror,
continuously and in parallel. No mirrors here, just LEDs at the other end of the cables.
Two very small boards not shown have the (#1) MAX232 chip and (#2) the lens position sensor.
Any questions or comments about this circuit?
Psst! Any PIC runs faster than this board does.
ERROR CORRECTION: The address selector chips are 74154's (not 74164)
Step 7: Just Have to Keep It Running Long Enough to Make a Video.
Maybe I'll (have to) get rid of the motors and put a woofer there.
And answer questions or add more helpful details about how it works.
Sorry, this step is not ready yet; am I being a jerk for posting it anyway?...
Also, if anyone cares, this "preview" image was first seen on the projector
after data synthesis, and later rendered or translated from the projector's data.