VR video is a form of content presentation in virtual reality, yet many of today's VR videos only partly convince us when we watch them through a headset. Why does capture quality matter? Because the quality of the video directly affects the viewer's sense of spatial immersion. Many VR videos now struggle to achieve immersion at all: low quality destroys whatever immersive feeling the viewer might otherwise have had.
True immersion comes from the combination of narrative immersion and spatial immersion. Narrative immersion arises when viewers invest emotionally in a story. Imagine reading a novel or watching a movie you really love: when the story fully absorbs you, you lose track of time. Spatial immersion works in a similar way. It takes hold when viewers feel present inside the virtual environment. Surrounded by that environment and its sounds, and drawn in by a compelling story, viewers are easily moved. If VR filmmakers can combine narrative and spatial immersion, they can deliver genuinely convincing virtual reality.
There is no easy way to manufacture narrative immersion. But we can create spatial immersion by understanding how to capture video well enough to "cheat" the viewer's perceptual system.
Much of the content tagged as VR is far from real virtual reality. Services such as YouTube 360 and Facebook 360 let users watch 360-degree video through a standard video player, typically in a web browser or with Google Cardboard. Holding a phone in front of your eyes is tolerable for a few minutes, but who wants to watch a two-hour movie that way? Head-mounted VR headsets solve this problem.
Mobile VR headsets such as the Gear VR have motion sensors that provide fast head-tracking feedback and a reasonable degree of immersion, and more products like them will reach the VR market in 2016. Desktop-class devices such as the HTC Vive and Oculus Rift use a PC's GPU for 3D rendering and offer more complete motion tracking. Yet whether on the Gear VR, the Oculus Rift, or the HTC Vive, video playback quality remains unsatisfactory despite the high-resolution displays.
So even with good headset hardware, low video quality still undermines immersion. The key is delivering the right picture to the device at the right time. Below, we look at the technical challenges the Gear VR, the most widely used headset, faces in delivering a high-quality VR video experience.
VR video content challenges
Most VR headsets provide a high-resolution display; the bottleneck is the low resolution of the video content itself.
Problem one: file size. Most video experiences on the Gear VR are downloaded through VRSE. These files are usually very large, some of them over 1 GB, so loading on the device is slow. This is the first factor that hurts the experience.
Problem two: video quality. This is the biggest issue. Even professionally produced VR video usually looks blurry. Watching it feels like going back to the 1990s, when computers could only play standard-definition video.
There is no doubt that capture should be done at the highest quality possible. Yet what the headset actually displays never reaches that ideal. Why? The answer lies in resolution.
Resolution
The Gear VR supports Ultra HD playback; in other words, each video frame can be as large as 3840x1920 pixels, played back at 30 frames per second (the higher the resolution, the lower the achievable frame rate). On paper, the Galaxy S6 looks well suited to playing 4K video. In practice, that is not quite the case.
To understand why, we need to answer two different questions about resolution. First: what resolution does the Galaxy's screen itself display? Call this the screen resolution. Second: how much resolution actually reaches the eye through the headset's field of view? Call this the FOV resolution.
The Galaxy S6's screen resolution is 2560x1440 pixels, so each eye gets 1280x1440 pixels.
Note: the Gear VR eyepieces crop the image slightly, so the effective per-eye resolution is closer to 1280x1280 pixels. The difference is subtle, but it is not quite "Full HD". Still, at a pixel density above 500 pixels per inch, the picture looks very sharp.
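For concreteness, here is a minimal sketch of the per-eye screen arithmetic; the 1280x1280 "usable" figure is the eyepiece-cropped estimate from the note above, not a value reported by any device API.

```python
# Minimal sketch of per-eye screen resolution on a Galaxy S6 inside a Gear VR.
# The 1280x1280 "usable" figure is the eyepiece-cropped estimate from the note above.

SCREEN_W, SCREEN_H = 2560, 1440      # Galaxy S6 panel
eye_w = SCREEN_W // 2                # each eye gets half the panel: 1280 x 1440
usable_w = usable_h = 1280           # rough usable area after the eyepiece crop

print(f"per eye: {eye_w}x{SCREEN_H}, usable: {usable_w}x{usable_h}")
```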
Now back to FOV resolution. We already know an Ultra HD frame has 3840x1920 pixels, but those pixels must cover a full 360 degrees horizontally and 180 degrees vertically. The Gear VR's FOV is about 96 degrees, barely more than a quarter of 360 degrees, so at any moment we see only a corner of the frame; as we turn our head, the visible region of the video updates. The figure below shows the actual FOV area.
A simple calculation: 1920 pixels / 180 degrees gives 10.667 visible pixels per degree, and multiplying by the 96-degree FOV yields a FOV resolution of roughly 1024 pixels (10.667 x 96).
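Written out as a small script, the same arithmetic looks like this (the numbers are the ones quoted in this article, not values read from a headset):

```python
# The FOV-resolution arithmetic from the paragraph above, written out.
# All numbers come from the article; nothing is read from a headset API.

FRAME_W, FRAME_H = 3840, 1920        # Ultra HD equirectangular frame
SPAN_H_DEG, SPAN_V_DEG = 360, 180    # degrees the frame must cover
FOV_DEG = 96                         # approximate Gear VR field of view

px_per_deg = FRAME_H / SPAN_V_DEG          # 1920 / 180 = 10.667 px per degree
fov_resolution = px_per_deg * FOV_DEG      # 10.667 * 96 ~= 1024 px across the FOV

print(f"{px_per_deg:.3f} px/deg -> {fov_resolution:.0f} px visible in a {FOV_DEG}-degree FOV")
```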
So the picture we see through the headset is about 1024x1024 pixels, while the usable per-eye display resolution is 1280x1280 pixels. That gap is one reason the picture looks soft: after the headset maps the video onto the display, the source provides roughly 20% fewer pixels than the screen can show. But this is not the main reason. The stereoscopic effect is the real culprit.
Stereoscopy helps immersion: each eye sees a slightly offset view, which convinces the brain that the scene has real depth. On the Gear VR, stereoscopic display increases realism, and the sense of depth in the image creates a more convincing sense of space. Many 3D VR games also lean heavily on stereoscopic effects for the same reason.
Ideally, each eye would get its own full-resolution 360-degree view, which would require a frame with double the resolution along one dimension.
If Ultra HD video is 3840x1920, that would mean frames of 3840x3840, which is too much for the hardware. Instead, we keep the frame at 3840x1920 and give each eye 3840x960 pixels: each eye's half still spans the full horizontal sphere, but it must be stretched along the vertical axis to fill the complete view. As shown below:
The price of "packing" both views into one frame is that the per-eye resolution shrinks considerably. A 20% reduction may not be much, but stretching 512 source pixels vertically to 1280 display pixels (a 2.5x stretch) is a significant loss. Whichever axis is used for packing (top/bottom or side-by-side), half the resolution along that axis is lost.
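To see where the 512-to-1280 stretch comes from, here is a short sketch of the top/bottom packed-stereo arithmetic, again using only the figures quoted above:

```python
# Why top/bottom stereo packing halves the vertical resolution each eye receives.
# Only the figures quoted in the article are used here.

FRAME_H = 1920                  # the same Ultra HD frame, now shared by two eyes
SPAN_V_DEG, FOV_DEG = 180, 96
DISPLAY_EYE_H = 1280            # usable vertical pixels per eye (see the note above)

rows_per_eye = FRAME_H // 2                      # 960 source rows per eye
px_per_deg_v = rows_per_eye / SPAN_V_DEG         # 5.333 px per degree vertically
fov_rows = px_per_deg_v * FOV_DEG                # ~512 source rows inside the FOV
stretch = DISPLAY_EYE_H / fov_rows               # 512 rows stretched to 1280 (~2.5x)

print(f"{fov_rows:.0f} source rows -> {DISPLAY_EYE_H} display rows ({stretch:.1f}x stretch)")
```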
Screen blur during video playback
When video is streamed, the blur and pixel loss get much worse. YouTube 360 and other platforms serve video at less than 4K, because few viewers have the bandwidth for Ultra HD streaming, and because the video lives on a server rather than on the local device, the service further reduces resolution to keep playback smooth. This is why a streamed stereo VR video can look so blurry.
For most current displays (such as the Galaxy S6 and the Oculus Rift), it is actually better for the FOV resolution to exceed the display resolution: scaling down a slightly larger image makes the picture look smoother and reduces jaggies and noise.
A FOV resolution of 1536x1536 is therefore a good target. In that case the FOV resolution is about 20% larger than the display resolution (equivalently, the display scales it down by about 17%), which produced a noticeably better experience in our tests. Covering 1536 pixels across a 96-degree field of view requires 16 pixels per degree; multiplying out over 360x180 degrees gives a frame resolution of 5760x2880.
Now add the stereoscopic effect. Remember that, ideally, each eye gets a full-resolution view (in practice, by packing the left and right views into one frame). A rough calculation puts the source video at 5760x5760 pixels, about 33 million pixels per frame. Even with next-generation gigabit bandwidth, playing video of that size is simply out of the question.
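The target-resolution math works out as follows; this is a sketch of the arithmetic only, not a description of any real encoder or player:

```python
# The "ideal" source-resolution arithmetic for a 1536-pixel FOV, as described above.
# A sketch of the math only; it does not describe any real encoder or player.

FOV_DEG = 96
TARGET_FOV_PX = 1536

px_per_deg = TARGET_FOV_PX / FOV_DEG     # 1536 / 96 = 16 px per degree
mono_w = px_per_deg * 360                # 5760
mono_h = px_per_deg * 180                # 2880
stereo_h = mono_h * 2                    # 5760 with two eyes stacked vertically
pixels_per_frame = mono_w * stereo_h     # ~33.2 million pixels per frame

print(f"mono {mono_w:.0f}x{mono_h:.0f}, stereo {mono_w:.0f}x{stereo_h:.0f}, "
      f"{pixels_per_frame / 1e6:.1f} Mpx per frame")
```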
Pixvana System
At Pixvana, we think there is a better way to solve the resolution and playback problems, one that lets viewers experience ideal VR video while playing streams close to ordinary HD resolution. Facebook is already doing something similar on its own platform.
We are investigating several feasible ways to cover the FOV at high resolution. One is to cover the current FOV with a higher-resolution image, plus some extra margin for head movement; as the viewer's head turns, playback switches among multiple streams so that the FOV is always fully covered. We have already achieved this on the Gear VR, HTC Vive, and Oculus Rift. Another approach is to experiment with different packing and decoding schemes depending on the content.
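As a purely hypothetical illustration of the stream-switching idea, the sketch below picks a pre-encoded stream based on the viewer's head yaw; the 30-degree spacing and the file names are assumptions made for illustration, not Pixvana's actual implementation.

```python
# Hypothetical illustration of view-adaptive stream switching: one pre-encoded
# stream per 30 degrees of yaw, each with its own high-quality viewing direction.
# The spacing and file names are assumptions, not Pixvana's actual system.

STREAM_SPACING_DEG = 30
STREAMS = {yaw: f"video_yaw_{yaw:03d}.mp4" for yaw in range(0, 360, STREAM_SPACING_DEG)}

def pick_stream(head_yaw_deg: float) -> str:
    """Return the stream whose high-quality region is centered nearest the gaze."""
    nearest = round(head_yaw_deg / STREAM_SPACING_DEG) * STREAM_SPACING_DEG % 360
    return STREAMS[nearest]

print(pick_stream(47.0))   # -> "video_yaw_060.mp4"
```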
Pixvana is building an open system that lets content creators produce beautiful, high-resolution VR videos. The system adapts to changes in head position and in available bandwidth, ensuring that viewers consistently get the right picture quality. Pixvana runs on multiple platforms, mobile and PC alike. With it, video on the Gear VR will eventually look far better than it does today, and VR video will finally deliver the immersion we are hoping for.