
Stereoscopic home screen?

Is that true? How does that integrate when you have X pixels shared by two eyes vs. X/2 pixels for each eye above the flicker fusion threshold?

Not sure what you are referring to when you say "flicker fusion threshold." AFAIK, there's no blinking or alternating light with this phone. It's left and right images interlaced vertically, so even vertical stripes are from one image, and odd vertical stripes are from the other. And unlike a traditional interlaced signal, all stripes are lit at once because the LCD scans progressively. Add the parallax barrier to make sure the proper vertical stripe lands on the proper eye. No flicker involved.
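
If it helps to picture the spatial interleave, here's a rough sketch in Python/NumPy. This is just my own illustration of the idea, not HTC's actual compositor, and which view lands on the even vs. odd columns is my assumption:

```python
import numpy as np

def interleave_columns(left, right):
    """Spatially interleave two same-size views column by column: even
    pixel columns are copied from the left view, odd columns from the
    right view. Every column is lit at the same time -- nothing
    alternates temporally, which is why there is no flicker."""
    assert left.shape == right.shape
    frame = left.copy()
    frame[:, 1::2] = right[:, 1::2]  # odd columns come from the right view
    return frame

# Toy 4x8 frames: the left view is all 1s, the right view is all 2s.
left = np.ones((4, 8), dtype=np.uint8)
right = np.full((4, 8), 2, dtype=np.uint8)
print(interleave_columns(left, right))  # each row reads 1 2 1 2 1 2 1 2
```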

The parallax barrier is rigid and unmoving. The barrier always appears vertical relative to the orientation of the phone. In the sweet spot, a given pixel is only seen by the left eye, and by definition, the right eye sees the black of the barrier. The next pixel over is seen by the right eye, and the left sees the black of the barrier.

So each eye is only getting half of the resulting picture, and the brain puts it together. But the parallax barrier is not invisible; the black stripes are something we perceive as well. All of this data is present in the "video input" to our brain, so we end up perceiving a fully assembled image with depth, but also with vertical stripes.

Those vertical stripes effectively reduce the resolution. Just like at the eye doctor: it's easier to identify the letter with both eyes open than with just one. Our ability to resolve detail is actually diminished by the parallax barrier.

In 3D mode, the UI elements (share, delete, etc.) appear at the level of the screen because they are zero-parallax images. The problem is that they reside underneath the parallax barrier, so their resolution is unnecessarily cut in half. A lenticular array would solve this problem by not diverting the UI pixels to the left and right eyes.
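
Here's how I picture the zero-parallax part, again as a toy sketch of my own (nothing to do with Sense's actual rendering): the same button pixels get written into both the left and right views before the interleave, so both eyes receive identical data for the button and the brain places it at screen depth, yet each eye still only sees every other column of it.

```python
import numpy as np

def compose_with_ui(left, right, ui, ui_mask):
    """Write the SAME UI pixels into both views (zero parallax), then
    interleave the views column by column. The UI lands at screen depth,
    but because it still sits behind the barrier, each eye only gets half
    of its columns -- the same resolution penalty as the 3D content."""
    left = np.where(ui_mask, ui, left)
    right = np.where(ui_mask, ui, right)
    frame = left.copy()
    frame[:, 1::2] = right[:, 1::2]
    return frame

# Tiny example: a 4x8 scene with a 2x2 "button" in the top-left corner.
L, R = np.zeros((4, 8)), np.ones((4, 8))
ui = np.full((4, 8), 9.0)
mask = np.zeros((4, 8), dtype=bool)
mask[:2, :2] = True
print(compose_with_ui(L, R, ui, mask))
```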

It should be quite easy to tell that the UI buttons look fuzzier in 3D viewing mode than in 2D mode. If they look the same to you, then I'm going to chalk it up to the same reason some people don't see the screen door effect of PenTile screens :) The brain just does TOO good a job blending.
 
I was hoping for the same feature for this phone: to be able to view all content on the phone in 3D, including all forms of wallpaper and app icons. But after the first couple of days I was sadly disappointed to find that it isn't happening. Luckily, I fell in love with the other features of this phone.
 
The backlight is strobing; however, I suppose the flicker fusion threshold isn't important here. I gotta stop thinking of movies, or active shutter glasses in any case, whenever I think 3D.

Please. I see how the interface controls for the camera look in 3D. That only establishes how a 2D image gets scattered, and doesn't give the visual cortex a chance.


The parallax barrier hashes a 2D picture so it's unviewable. That doesn't address my question - does it halve resolution when the eyes and cortex are finished processing a 3D image? I could be wrong, but I don't think it does.

I think the cortex uses the perspective difference to resolve depth and it does that by integration:

x/2 + x/2 = x number of dots

The barrier is nothing more than polarization.

I don't see why any of that reduces detail by default.

PS to Canar - look at your camera controls in 3D. To have the interface look right in 3D, you'll need more than just a switch to turn it on; you'll need left + right images for everything you see on screen. I'm not even sure flat user interface stuff could be converted at runtime by a 2D-to-3D simulator (otherwise called a converter).
 

I don't disagree that the total number of pixels that reaches your brain is still the full qHD, but as I said before, that's not ALL that is reaching your brain. The black of the parallax barrier is getting in there also. So the brain is getting the qHD plus qHD/2 of vertical black lines, which it's incorporating into the final image. Quantitatively, that means any given pixel is only half there, which makes it harder for the brain to resolve what that point of light is (my eye doctor analogy). Qualitatively, the end result is not as detailed.
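
To put rough numbers on that (my own back-of-envelope; the assumption that the left/right interleave runs along the 960-pixel landscape width is mine):

```python
# Back-of-envelope pixel counts for a qHD (960 x 540) panel, assuming the
# left/right interleave runs along the 960-pixel landscape width.
total_px = 960 * 540               # 518,400 addressable pixels on the panel
per_eye_px = (960 // 2) * 540      # 259,200 image pixels reaching each eye
barrier_cols_per_eye = 960 // 2    # columns of barrier black each eye also sees
print(total_px, per_eye_px, barrier_cols_per_eye)
# Both eyes together still receive all 518,400 image pixels, but each eye's
# half is interleaved with an equal number of black columns -- that's the
# extra "black line" data being folded into the final percept.
```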

And I realize this could totally just be me, but when I pinch to zoom a 3D image (which disables the 3D mode), the resulting 2D image is way more revealing. In fact, this is the way I tell if my source images were in focus; under parallax-barrier mode, even a very blurry shot looks sharp. That is a telltale sign that the resolving power of the screen is reduced, at least from our brain's processing standpoint.

Now, if a next-generation parallax barrier can mimic the color of neighboring pixels (kinda like what some broadcasters do to the black bars on 4:3 sources when displayed in 16:9), the resulting assembled image might appear a LOT more solid and detailed. Not sure how feasible that would be, given that the purpose of the barrier is to block light and is therefore subtractive in nature. And in my experience, matching a subtractive color to an additive one is near impossible, since your ambient light source screws up the subtractive color.
 
You may well be right.

Are you seeing actual black lines when you view via your loupe (mine grew legs and walked away)? I'm expecting the polarization feature of the parallax barrier to be equivalent to a lenticular display when viewed on-axis.

Otherwise - are you suggesting that the parallax barrier is in fact a true temporal interlace and the images are at 2 x 30 Hz? If so, then the flicker fusion threshold definitely may play some role - even though the images are static, and even though the backlight abides, asking the brain to slow down to that rate would certainly cause issues. (2x one 1/30-second image is 1/15 of a second + integration time - and I don't recall the corpus callosum switching rate, but you want to stay away from that.)
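
Spelling that timing arithmetic out as a quick sanity check (this is only the hypothetical temporal case; the spatial-only interlace is what's discussed below):

```python
# Sanity check of the timing, assuming the hypothetical case where the
# left and right views alternate in time at 30 Hz each.
per_eye_rate = 30.0                  # Hz, one eye's image refresh
per_eye_period = 1.0 / per_eye_rate  # ~0.0333 s, one 1/30-second image
pair_period = 2 * per_eye_period     # ~0.0667 s = 1/15 s for a left+right pair
print(per_eye_period, pair_period)   # integration time would come on top of this
```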

Anyone who's watched native 1080i@60Hz vs 720p@60Hz (native, meaning end-to-end - source to set) can attest to the issues of interlacing, end resolution aside.

I was somehow convinced that the interlacing is purely spatial (hence only half-SBS rather than full-SBS support) rather than temporal in this case.

As time permits, I'll try the 3D / 2D comparison you suggest; the proof is always in the pudding.

I was following you up until this statement -

under parallax-barrier mode, even a very blurry shot looks sharp
And yet you say it looks sharper when in 2D? :thinking:

PS - Check my timing math, I might be out of whack there - was typing and on phone (still am) when I rattled that out.
 
Viewed under a loupe in 3D mode, the parallax barrier is a series of black stripes.

As far as I can tell, it's solid temporally. No blinking; nothing like shutters. And it's fixed in position and floats above the actual display.

The actual display interlaces the stereo images by alternating vertical stripes from the left and right images, and these are solid temporally as well. No trickery with blinking between odd and even vertical stripes.

For any given pixel, only one eye sees it. The other eye sees the parallax barrier masking that pixel. Hence the brain receives info on the pixel and the barrier for every pixel.

If the source images are blurry, it is less noticeable when viewed in 3D, and I'm suggesting that the blur is masked somewhat because the resulting 3D display cannot resolve the blur as well. Zoom in slightly or toggle 2D mode, and it will be easy to see any blur in the left image (the image used for 2D mode).

Another way to put it: if a 2 MP image is slightly out of focus, you will see the blur on a 1080p screen, but not on a 480p screen. The latter simply doesn't have the resolving power to pick up on the blur. Granted, if the blur were bad enough, the lower-res screen would pick it up.
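
If you want to see that numerically, here's a little sketch with made-up numbers (mine, nothing to do with the phone's actual pipeline): take a hard edge and the same edge with a slight blur, downsample both the way a lower-resolution screen effectively would, and the difference largely washes out.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# A hard edge vs. the same edge with a slight focus blur (sigma = 2 px).
edge = np.zeros(400)
edge[200:] = 1.0
soft = gaussian_filter1d(edge, 2.0)

def downsample(signal, factor):
    """Block-average: a crude stand-in for showing the image on a
    lower-resolution screen."""
    return signal.reshape(-1, factor).mean(axis=1)

print("full res max difference:", np.abs(edge - soft).max())
print("1/8 res  max difference:",
      np.abs(downsample(edge, 8) - downsample(soft, 8)).max())
# At full resolution the blurred edge is easy to tell from the sharp one;
# at 1/8 resolution the difference is several times smaller -- most of what
# the blur removed was detail the low-res view couldn't show anyway.
```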
 
http://androidforums.com/htc-evo-3d/380627-close-up-parallax-barrier.html

Yeah, it depends on how close you are to the screen and at what angle you look at it; you'll see different things. In my close-up with the loupe, you can see that the space between the barrier stripes appears wider than 1 pixel.

That space determines where the sweet spot will be. If I'm conceptualizing the barrier correctly, a thinner space means the sweet spot is moved farther away from the screen, and a wider space brings it closer. But obviously there are boundaries that cannot be violated, or the parallax barrier is no longer capable of doing its job, which is to make sure that, for any given pixel, only one eye can see it.
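
If it helps to put numbers on the sweet spot, the usual textbook model is similar triangles. Note that this uses the depth gap between the barrier and the pixel plane, which is a different knob from the stripe spacing I measured above, and every number below is my assumption rather than an HTC spec:

```python
def sweet_spot_distance_mm(eye_separation_mm, barrier_gap_mm, pixel_pitch_mm):
    """Simplified similar-triangles model of a parallax barrier: two
    adjacent columns one pixel pitch apart, seen through a slit that sits
    barrier_gap_mm in front of the pixel plane, land on two eyes
    eye_separation_mm apart at roughly this viewing distance."""
    return eye_separation_mm * barrier_gap_mm / pixel_pitch_mm

# Assumed numbers, not HTC specs: 65 mm between the eyes, ~0.099 mm pixel
# pitch for a 4.3" qHD panel, and a guessed 0.45 mm barrier-to-pixel gap.
print(sweet_spot_distance_mm(65, 0.45, 0.099))  # ~295 mm, about a foot away
```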


EDIT
I just used my loupe on a red, green, blue, and white image and toggled between 2D and 3D. The spacing between the parallax barrier stripes is just barely wider than 1 pixel.

In landscape mode, the subpixels are horizontal, so an image with mostly red, green, or blue will appear to have horizontal, rectangular pixels. Each subpixel is only ~1/3 of a square. R+G+B = 1/3 + 1/3 + 1/3 = 1 square pixel.

Viewing the white image, it's clear that only one pixel shines through the parallax barrier gaps. Depending on your angle and distance (outside of sweet spot), you might see fractions of adjacent pixels shine through. That's evidence of the parallax barrier not doing its job.
 
I just looked at some pictures of how the barrier works. I see now that the barrier is above the pixels, so there is a little more than one pixel between each.

Do we know how the barrier turns black? How is it turned on and off, and why can't we always see residuals of it?

As for the width of the barrier, I need to think about that. I think it has something to do with the distance of the sweet spot, because when some people view my phone with glasses they have to hold it much farther away than I do.
 

Unless their glasses are magnifying, they shouldn't need to hold the phone farther away. In my experience, people who struggle to find the sweet spot tend to move the phone away, possibly in an attempt to ease eye strain. The ideal distance of the phone screen should be about 12 inches from your face.

The parallax barrier, I believe, is just liquid crystal, like an old-school calculator display. Under certain circumstances, you CAN see the residuals (just like a calculator tilted at the right angle). The reason it's hard to see is the light coming from the display. And when the display is off, the screen appears black, not silver like a calculator, so it masks the contrast of the residuals.

I've noticed that when looking at something white, say the Google home page, the residuals of the parallax barrier cause a little bit of iridescence. If you gently tilt the screen, you can see a color shift in the white (very faint greens and reds).
 
I honestly just want the option. There has to be some sort of trigger to switch modes to 3D, no? Maybe someone will figure out how to turn it on for a home screen app.


I too was hoping that it would at least be an option. If you watch the commercial it kind of misleads you into thinking that the home page is in 3D. I am really hoping that someone is able to create an app to allow it. I think it would be really cool for the live weather page to be displayed in 3D.

On a side note: the Sharp Galapagos 3D in Japan, which is supposed to come to the US, does display 3D wallpaper. Unfortunately, it only has one camera on the back, so in order to create a 3D image you have to take two pictures, which would mean it probably isn't capable of recording 3D video. I don't think it will make it to the US with only one camera.
 