We’ve been promised a future filled with autonomous vehicles, mixed reality, and smart homes. However, to deliver on this, our software needs to be able to easily translate our physical presence into the digital space. From autonomous vehicles ‘seeing’ pedestrians as crude bounding boxes to virtual reality experiences still relying on controllers for basic body tracking, it’s evident that our technology is not yet fully ‘human aware.’
Computers need to understand how we’re shaped and the way we move. On June 1st, we’re announcing the launch of SOMA, the first human-aware artificial intelligence platform that can accurately predict 3D human shape and motion from everyday photos or videos. With SOMA, we look to enable brands and developers to easily capture 3D human shape and motion to power a range of applications.

Built on a statistical model
At the core of SOMA is our statistical understanding of 3D human shape, trained using thousands of 3D scans and motion data. We use our statistical approach to make a template 3D mesh of the human body that can align to any data source. By first understanding the 3D human body statistically, we can then begin to unravel the complete range of body types and the full constraints of natural human movement.
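To make the “template mesh plus statistics” idea concrete, here is a minimal sketch, assuming a PCA-style model: a mean mesh plus learned shape directions, where any individual body is a small vector of coefficients. The vertex count, component count, and arrays below are illustrative placeholders, not SOMA’s actual internals.

```python
import numpy as np

# Illustrative sketch of a PCA-style statistical body model.
# All names and dimensions are assumptions, not SOMA's actual internals.

N_VERTICES = 6890          # vertices in the template mesh (hypothetical)
N_SHAPE_COMPONENTS = 10    # number of learned shape directions (hypothetical)

# In a real system these would be learned from thousands of registered scans;
# here they are placeholders with the right shapes.
template = np.zeros((N_VERTICES, 3))                                     # mean body mesh
shape_basis = np.random.randn(N_SHAPE_COMPONENTS, N_VERTICES, 3) * 0.01  # stand-in for PCA components

def body_mesh(shape_coeffs: np.ndarray) -> np.ndarray:
    """Return mesh vertices for a body described by low-dimensional coefficients.

    The mesh is the template deformed along each learned shape direction,
    weighted by the corresponding coefficient.
    """
    offsets = np.tensordot(shape_coeffs, shape_basis, axes=1)  # (N_VERTICES, 3)
    return template + offsets

# A taller, heavier, or otherwise different body is just another point
# in coefficient space:
vertices = body_mesh(np.array([1.5, -0.3, 0.0, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]))
print(vertices.shape)  # (6890, 3)
```

Because every body is expressed as coefficients over the same template topology, the same mesh can in principle be aligned to scans, depth data, or image evidence.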
Unlocking the smartphone with convolutional neural networks

Previously, our statistical approach used emerging 3D scanning or consumer depth-sensing technology as primary inputs. By now combining our statistical understanding with convolutional neural networks, we can accurately predict major joints, toes, facial features, and 3D body shape using everyday photos or videos. This unlocks advanced computer vision, 3D modeling, accurate digital body measurements, and markerless motion capture for everyone using the devices we already own: smartphones.
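For a rough sense of what a prediction network of this kind might look like, here is a toy convolutional regressor in PyTorch that maps a photo to 2D joint locations and shape coefficients. The architecture, joint count, and output heads are assumptions for illustration; the announcement does not describe SOMA’s actual network.

```python
import torch
import torch.nn as nn

class BodyRegressor(nn.Module):
    """Toy CNN mapping an RGB photo to joint locations and shape coefficients.

    Layer sizes and output heads are illustrative assumptions, not SOMA's
    actual architecture.
    """

    def __init__(self, n_joints: int = 24, n_shape: int = 10):
        super().__init__()
        self.n_joints = n_joints
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global pooling -> fixed-size feature
        )
        self.joints_head = nn.Linear(128, n_joints * 2)  # (x, y) per joint
        self.shape_head = nn.Linear(128, n_shape)        # coefficients for a body model

    def forward(self, image: torch.Tensor):
        h = self.features(image).flatten(1)              # (batch, 128)
        joints = self.joints_head(h).view(-1, self.n_joints, 2)
        return joints, self.shape_head(h)

model = BodyRegressor()
photo = torch.rand(1, 3, 224, 224)          # stand-in for a normalized everyday photo
joints_2d, shape_coeffs = model(photo)
print(joints_2d.shape, shape_coeffs.shape)  # torch.Size([1, 24, 2]) torch.Size([1, 10])
```

The key point the paragraph above makes is the pairing: the network predicts evidence from pixels, while the statistical body model constrains those predictions to plausible human shapes and poses.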
We designed SOMA to run on a backend server (and later, natively on mobile devices) to empower any developer to access SOMA and integrate it into their products and services. Over the summer, we’ll be announcing a series of APIs and SDKs to enable businesses and developers to plug into SOMA. […]
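Since those APIs and SDKs have not yet been published, the sketch below is purely hypothetical: the endpoint URL, request fields, and response keys are invented placeholders showing what posting a photo to a backend shape-prediction service could look like.

```python
import requests

# Hypothetical endpoint and payload: the real SOMA API had not been
# published at the time of this announcement.
API_URL = "https://api.example.com/v1/predict-body"   # placeholder URL

with open("photo.jpg", "rb") as f:
    response = requests.post(
        API_URL,
        files={"image": f},                                # the everyday photo
        headers={"Authorization": "Bearer YOUR_API_KEY"},  # placeholder credential
        timeout=30,
    )
response.raise_for_status()

prediction = response.json()
# Hypothetical response fields, mirroring the capabilities described above:
print(prediction["joints_2d"])      # predicted major joints
print(prediction["shape_coeffs"])   # low-dimensional body-shape coefficients
```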