Anybody considering picking up an Oculus Rift..?


I was hoping I'd misremembered Steam's ToS, but it seems I hadn't. Calling Steam 'free of junk' is just plain wrong.
 
Aaaaand... stepping in to grab some of the recently lost goodwill comes the first (independent, for now!) rival -

http://www.trueplayergear.com/

http://www.gameranx.com/updates/id/21087/article/new-vr-project-true-player-gear-emerges-to-take-the-spotlight/


Very interesting - it combines the best of both the Rift and Sony's Morpheus. Its new twist is onboard cameras that let you snatch a view of your keyboard/controller/room without taking the headset off (or maybe for AR use too...). It also has the same load-reducing facility as Morpheus: a box that handles the image pre-distortion, so conventional HDMI sources will work directly and, as they put it -

Hardware acceleration: Let Totem do the work, not your computer.


Performance is important and every millisecond counts. We do pre-lens distortion and sensor fusion in hardware. What does that mean? We offload work from your computer so you won't have to buy a new one to play your games in VR.
It also features very large lenses (as does Morpheus, glimpsed briefly in GDC videos) and directional sound positioning, again like Morpheus.
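Roughly what the pre-distortion step does, as a sketch: the image is warped with the inverse of the lens's pincushion distortion before it reaches the panel, so the lens straightens it back out. The radial-polynomial coefficients below are made-up illustrative values, not Totem's (or anyone's) real calibration.

```python
# Sketch of radial "barrel" pre-distortion as done by HMD renderers.
# K0, K1, K2 are illustrative placeholders, not real calibration data.
K0, K1, K2 = 1.0, 0.22, 0.24

def predistort(u, v):
    """Map a screen coordinate (u, v in [-1, 1], centred on the lens
    axis) to the source-image coordinate to sample from.  Scaling by
    K0 + K1*r^2 + K2*r^4 barrel-warps the image so that the lens's
    pincushion distortion cancels it out."""
    r2 = u * u + v * v
    s = K0 + K1 * r2 + K2 * r2 * r2
    return u * s, v * s

# The warp grows towards the edges of the view:
for u in (0.0, 0.5, 0.9):
    print((u, 0.0), "->", predistort(u, 0.0))
```

Doing this remap (plus sensor fusion) in a dedicated box is what lets plain HDMI sources feed the headset directly.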
 
So much activity -


Valve's Michael Abrash (host of their recent 'What VR could, should and will be..' presentation) has left Valve to become Oculus' "Chief Scientist" -

http://www.oculusvr.com/blog/introducing-michael-abrash-oculus-chief-scientist/

We're on the cusp of what I think is not The Next Big Platform, but rather simply The Final Platform – the platform to end all platforms – and the path here has been so improbable that I can only shake my head.


(...)


there are half a dozen things that could be done to display panels that would make them better for VR, none of them pie in the sky. However, it's expensive engineering. And, of course, there's also a huge amount of research to do once we reach the limits of current technology, and that's not only expensive, it also requires time and patience – fully tapping the potential of VR will take decades. That's why I've written before that VR wouldn't become truly great until some company stepped up and invested the considerable capital to build the right hardware – and that it wouldn't be clear that it made sense to spend that capital until VR was truly great. I was afraid that that Catch-22 would cause VR to fail to achieve liftoff.


That worry is now gone. Facebook's acquisition of Oculus means that VR is going to happen in all its glory. The resources and long-term commitment that Facebook brings gives Oculus the runway it needs to solve the hard problems of VR – and some of them are hard indeed.
 
VR - The platform to end all platforms?! Quick, to the ROFLcopter!

It'll have its crowd, like 3D cinema/TVs, but it's not the be-all and end-all.
 
Sooner or later, it will be.

Not with something like the Oculus Rift - but with better technology, potentially direct neural connections (isn't that sort of thing already used for artificial limbs?), etc.

You should read stuff like Otherland or Neuromancer. If we get technology like this, and I think we will (at least to some degree) in my lifetime, then it will supersede any other I/O technology.

Imagine having something between the brain and your senses and muscle control. It could switch off your control of your body and make you feel another (virtual) body and see a virtual world. It could also mix the true input signals with computer-generated graphics, to, for example, provide a kind of HUD, or let you use an application while staying aware of the outer world.

I think it's likely that the human brain is capable of connecting to additional "hardware", considering how flexible it is. Pretty sure thought control is doable like this. Maybe even something like true omnidirectional vision, but for that the brain would need to grow some powerful image-processing apparatus similar to the one for the real eyes, so it might take another few hundred years, tops, to help the brain grow that artificially.

Dreaming aside: how open is the Rift at the moment? Will I get an SDK with source if I buy one?

I tried registering as a developer, but they want to know about my "project" and stuff.

Edit: I just had to enter a name for that project; now I have access to the SDK, including source. Question answered.
 
It'll have its crowd, like 3D cinema/TVs, but it's not the be-all and end-all.
Agreed. Kinect/EyeToy levels of popularity, maybe. The whole wearable-visor style is too obtrusive and gimmicky for a total takeover.

The end-all will be what I think T4b mentions above - some sort of direct-feed tech. Implants are the obvious solution, though I still think these would be perhaps a bit too scary for the masses in the beginning, so maybe some sort of "outside", non-invasive chip which nevertheless streams data directly to your neural system.

But eventually it will be implants or neck-sockets. After all, the Nip/Tuck scene would have been kinda unthinkable half a century ago; now young 'uns get silicone breasts as birthday presents.
 
You will most probably not experience Neuromancer-like technology in your lifetime. It's like fusion power or a mechanical heart: all plausible in theory and partially working in the lab, but still far outside the realm of affordable engineering. 3D cinema seems a pretty good comparison for this newly hyped platform to end all platforms.
 
Aaaaand... stepping in to grab some of the recently lost goodwill comes the first (independent, for now!) rival -
(...)
Interesting. Maybe I'll finally get to play Doom 2 and Hexen in VR this way sometime without the BS from Facebook and Valve.
 
SNES9x VR - complete with CRT playroom -

https://www.youtube.com/embed/d_ccpGmUAP8?feature=oembed

One of the VR ideas I love: era-appropriate playrooms for every system. Recreate your old computer corner, or some Radio Shack you used to skulk about in. I want my ZX Spectrum, cassette recorder and 14" TV like this... I'll also need a virtual kettle to boil while I watch games load.


When someone does a virtual Funspot for MAME, I think I'll need oxygen.
 
You will most probably not experience Neuromancer-like technology in your lifetime. It's like fusion power or a mechanical heart: all plausible in theory and partially working in the lab, but still far outside the realm of affordable engineering. 3D cinema seems a pretty good comparison for this newly hyped platform to end all platforms.
They still have around 70 or 80 years to arrive within my lifetime, so I'm confident. I'm confident about the length of my life too, though 70-80 more years would be a bit over the mean. Then again, I have a healthier lifestyle than the mean, don't I? ^^
If Moore's law holds true (or newer technologies bring similar advancements - Moore actually only talked about transistor counts, not computing power), we /will/ have enough computing power for stuff like the AIs in Neuromancer, or maybe even the extremely detailed, vast artificial worlds of Otherland.
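Back-of-the-envelope, assuming the popular 18-months-per-doubling reading (an assumption - Moore's own observation was about transistor counts):

```python
# Growth factor if Moore's-law-style doubling (assumed here: every
# 18 months) held for another 75 years.
years = 75
doublings = years * 12 / 18   # = 50
print(f"{doublings:.0f} doublings -> factor of {2 ** doublings:.2e}")
# 50 doublings -> factor of 1.13e+15
```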
 
Well, 2^50 is a factor of about 10^15 (thousands of times the number of stars in our galaxy); I think we can rule out Moore's law holding over such a timespan. But the question is how much computing power we would need for an Otherland (BTW I was quite disappointed by Otherland - I would have liked another Dragonbone Chair, perhaps with a slightly more mature hero)?


I don't know, but I cannot help thinking that it requires much more computing power than is necessary to exceed the rather humble human brain in every aspect. And that is the point where it becomes really interesting, and it might happen sooner than we wish.
 
I don't know, but I cannot help thinking that it requires much more computing power than is necessary to exceed the rather humble human brain in every aspect. And that is the point where it becomes really interesting, and it might happen sooner than we wish.
I think current hardware is enough to run a human emulator. It's just a matter of software, and that's the hard bit: emulating a very weird legacy platform without any spec and without any good method to reverse engineer it.
 
Human processing and computer processing are totally different. I've read the brain estimated at around 100 petaflops of raw processing power in modern computer terms, while I remember our best supercomputers only managing about 33 petaflops. But here's where the difference comes into play: the human brain is thousands, if not millions, of smaller specialized systems (depending on how you define a "system" in terms of the mind and body), whereas supercomputers are usually just thousands of copies of the same type of processor. Each of the brain's systems works independently from the whole, but at the same time together with it: doing something while thinking of something else entirely, subconscious processes, emotions and thought, verbal language, visual processing turning everything around you into your reality, and so many more. All insanely complex, and all things we do instantly without even thinking about them. Logic, on the other hand, takes us an extremely long time, where a computer can do the same operation, or millions like it, in an insignificant fraction of a second. I believe the two are built so differently that getting them to perform similar functions would require starting from scratch on the architecture.

AI has always interested me, but what interests me more is emulating life - everything outside of yourself. How much processing power would it take to procedurally generate all of your perceived surroundings? Actions could be done on relatively few, even single-threaded, processes, I'd imagine. Games are a good example: your environment and surroundings could potentially be replicated if enough work were put into more complex logic and mature graphics.
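One reason procedural surroundings are cheap, at least on memory: the world can be a pure function of position, computed only where you look, with nothing stored. A toy hash-based value-noise "terrain" as a sketch (all constants here are arbitrary illustration, not from any engine):

```python
import math

def hash2(x, y, seed=1337):
    """Deterministic pseudo-random value in [0, 1) for grid point (x, y)."""
    h = (x * 374761393 + y * 668265263 + seed) & 0xFFFFFFFF
    h = (h ^ (h >> 13)) * 1274126177 & 0xFFFFFFFF
    return (h ^ (h >> 16)) / 2**32

def value_noise(x, y):
    """Smoothly interpolated noise: the 'terrain' at any coordinate is
    derived on demand -- the world is never stored anywhere."""
    x0, y0 = math.floor(x), math.floor(y)
    fx, fy = x - x0, y - y0
    sx, sy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)  # smoothstep
    top = hash2(x0, y0) + sx * (hash2(x0 + 1, y0) - hash2(x0, y0))
    bot = hash2(x0, y0 + 1) + sx * (hash2(x0 + 1, y0 + 1) - hash2(x0, y0 + 1))
    return top + sy * (bot - top)

# The same coordinates always yield the same 'world':
print(value_noise(12.7, 3.4), value_noise(12.7, 3.4))
```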
 
I agree with _wb_ that today's computing power may already be enough to surpass human abilities given the right software (which hopefully won't be available for the next 20 years, since my job as a software engineer would be among the easiest to replace :))


OTOH I think that the good old universe will remain the only convincing 'life simulator' this century.


edit: in an attempt to un-derail this thread, I'd like to point out that the Rift currently isn't close to anything out of Neuromancer or Otherland. It isn't even intuitive yet (watch Mr. Schafer & Co. being helpless in the Mnemonic part of the AF2014 playthrough on Twitch). The most important problem (IMO) is the need for an additional controller for actual interaction with the environment, and that seems to be a hard problem. In the end I'm not convinced that a great 3D effect and head/eye tracking are the most important aspects (or even among the most important aspects) of immersion.
 
Human processing and computer processing are totally different. (...)
The main difference between humans' electro-chemical wetware and our current approach to silicon-based hardware is that human brains are massively parallel, with a complicated and self-modifying interconnection schema, a low clock speed and a high failure rate (compensated for by redundancy and adaptive error-correction mechanisms), while human-designed hardware is more sequential in nature and has much higher clock speeds and accuracy. To simulate a human brain perfectly down to the molecular level is probably beyond the possibilities of current technology (at least at full speed), but it seems unlikely to me that you have to go that far to obtain a "pixel-perfect" human emulator - just like running a circuit simulator is not the most efficient approach to emulating hardware. Even a human emulator with some small imperfections would already be very interesting and would have to be considered to have human intelligence (e.g. it would pass the Turing test effortlessly).
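The redundancy point is easy to demonstrate in a few lines: unreliable units plus majority voting give an arbitrarily reliable whole (von Neumann's old multiplexing argument in miniature; the failure rate and unit counts below are arbitrary):

```python
import random

def noisy_unit(correct_bit, failure_rate=0.2):
    """A 'neuron' that gives the wrong answer 20% of the time."""
    return correct_bit if random.random() > failure_rate else 1 - correct_bit

def majority_vote(correct_bit, n_units):
    votes = sum(noisy_unit(correct_bit) for _ in range(n_units))
    return 1 if votes > n_units / 2 else 0

# Reliability of the ensemble vs. a single unit:
trials = 10_000
for n in (1, 9, 101):
    ok = sum(majority_vote(1, n) == 1 for _ in range(trials))
    print(f"{n:3d} units -> {ok / trials:.3f} correct")
# A single 80%-reliable unit stays at ~0.80; 101 of them exceed 0.999.
```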
 
Recreating imperfections shouldn't be the goal. But I still think the architecture at the most basic level is so different that it would be counterproductive. I think a large grid of reprogrammable processors, in several larger systems hard-coded for function, would be best and would better mimic how our brains work at that level. There are large portions of the brain that wouldn't need to be replicated, as their function is more efficiently done by today's embedded systems. Things like temperature regulation are still needed, but not at the complexity our brain requires, because of the lack of organics. Entire subsystems that regulate breathing, blood pressure, heartbeat, digestion etc. aren't needed either and could be replaced with adequate cooling and voltage regulation. I'm not sure what percentage of the brain is dedicated to things that wouldn't be needed anymore, but I'd imagine it's significant. Point being: getting close is what matters, while matching it exactly shouldn't be a goal.

Now the question remains whether such a system could be emulated in software, similar to how we throw processing power at translating from one architecture to another. With so many "cores", magnitudes more than the handful in what we currently emulate, I personally think it would be prohibitively expensive in processing terms to work at the same scale we do. I strongly feel new specialized hardware that is at least similar to how we are set up, rather than matching us bit for bit, would be best.

On the software side you obviously have more experience than I do, but trying to recreate every subroutine that makes up an adult or even toddler human's mind would be quite the monumental task. Instead, taking another nod at biology, create a system that is almost entirely a learning machine - one that can even reprogram parts of its own processing to handle input more efficiently. It should also have a set of hard-coded rules, mostly low-level functions: how the subsystems communicate at a general level; how to program its reprogrammable processors to better receive input and control itself, so it doesn't constantly crash itself unrecoverably while trying to build routines; and how to receive input, with nothing coded about what to do with it beyond storing and comparing information. Leave that last part to the machine's own construct of reality: throwing out garbage or illogical comparisons, it tries to form a model of the reality outside it from the input it's given. Add rules for survival and for learning to self-improve, since that is also how we are built - a mix of hard-coded "instinctual" memory along with a vast ability to learn and construct patterns from input. That, I feel, would be most successful at recreating intelligence ("artificial" intelligence, I guess, wouldn't be appropriate if the system is coded to actually be intelligent).

I've always wanted to write a learning program, and as a proof of concept I may just do it someday at a much smaller scale, but to do it right I think a system like the one I described would be ideal for the end goal.
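A proof of concept at the very smallest scale might look like this: the only hard-coded "instinct" is the update rule, and the behaviour (the weights) is entirely learned from examples. A toy perceptron learning AND - nothing like the grand architecture above, just the minimal version of "a learning program":

```python
# Toy 'learning machine': a perceptron that learns AND from examples.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for epoch in range(20):
    for x, target in examples:
        error = target - predict(x)   # the hard-coded update rule
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([(x, predict(x)) for x, _ in examples])
# [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
```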

We can split this discussion off if needed; it's getting more and more off-topic.

Edit: interestingly enough, NVIDIA very recently had a conference talk exactly in line with this conversation. Quite the coincidence, actually. A very interesting watch, and it dwarfs the scale of the problem I had imagined - we are very far off from full-scale capability, hardware-wise.

https://www.youtube.com/embed/37Yt41ouaNM?feature=oembed
 
The problem on the software side is that we're not good at designing algorithms with emergent behavior, like Conway's Game of Life or artificial neural networks. We basically discover such things through trial and error and some luck, not through engineering-style design. It's too hard to predict or have an intuition about what the emergent behavior will be - it's like trying to guess the physical shape and the social behavior of an animal from its DNA sequence. Very simple rules can lead to very complex emergent behavior. My guess is that human brains work like that: I conjecture that they can be described with relatively simple probabilistic rules, executed in a massively parallel way, leading to the complex emergent behavior we call consciousness or intelligence.
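Conway's Game of Life illustrates the point well: the complete rule set fits in two lines, yet gliders, oscillators and even universal computation emerge. A minimal sketch:

```python
from collections import Counter

def step(alive):
    """One Game of Life generation; `alive` is a set of (x, y) cells.
    Rules: a cell lives next turn iff it has 3 live neighbours, or has
    2 and is currently alive."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in alive
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in alive)}

# A glider: five cells whose emergent behaviour is to walk diagonally.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):          # after 4 steps it has moved by (+1, +1)
    cells = step(cells)
print(sorted(cells))        # the same shape, shifted one cell down-right
```

Nothing in those rules mentions "glider" or "walking"; the behaviour is purely emergent, which is exactly why it's so hard to design for.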
 