Testing Augmented Virtual Reality
GIF references: video recordings by H Muzart (made mainly in Canterbury, Kent, UK, on an Android smartphone) of me using augmented reality apps and machine learning apps by Google and its affiliates, and by third parties making programs for Google-based platforms.
Since 2016, my particular focus here has been on:
Applying, using, and testing these programs (by Google and its affiliates, and by third parties making programs for Google-based platforms), reviewing them, and collecting real-life data to refine the tools themselves.
In the future, I hope to contribute:
To the computational tools themselves.
I have been working on building 3D simulations, and I am also investigating how those could be implemented on portable mobile augmented reality devices. I am also exploring how to combine these with deep machine learning, even though ML is not intrinsically related to augmented/virtual reality.
Realistic 360-degree Virtual Reality
These would consist of visual 360-degree photorealistic environments or 360-degree 3D CGI (computer-generated builds), viewed through immersive VR goggles (more info at CognTech BMIs), with possible haptic feedback (via hand sensations). I am interested in making those visual simulations myself, as I have experimented with 3D designs [link-1, link-2, BNT, SciOrg, HM.info] and have posted photos and videos on Google Maps/Earth (link) [SciOrg, HM.info].
Since 2009, I have worked on e-learning with UK schools (in Kent) and universities. In 2012, my second-year thesis work with G Campbell at UCL allowed me to explore the state of e-learning in science/medical education. Also see this section. Since 2018, I have been experimenting further with these tools, which consist mainly of Google-related apps, or apps made using Google-related products. This technology has major applications in, e.g., medical surgery training and airplane pilot training, and in entertainment across many domains of life. Most importantly, it can be used to further academic education and scientific research in cognitive neuroscience. I am working on porting my own paradigm (in Unity) to VR and AR, but much work remains to be done.
3D models in the real world (Augmented Reality) (non-AI)
Models can be superimposed on real-world environments via one's mobile device camera or wearable glasses (e.g. Google Glass, HoloLens). For this, I have been revisiting some older files (2012, 2016-2018) I made in AutoDesk, CAD-Works, Unity, Blender, 3DBuilder, SketchUp, and others, and also making new ones, to repurpose them into AR-compatible formats.
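At its core, superimposing a 3D model on a camera view comes down to projecting model points into image pixels. The sketch below shows the standard pinhole-camera projection that underlies this step; the intrinsic parameters (fx, fy, cx, cy) are made-up illustrative values, not taken from any specific device or app mentioned here.

```python
# Minimal sketch of the math behind overlaying a 3D model vertex on a camera
# image: pinhole projection from camera coordinates to pixel coordinates.
# Intrinsics below are hypothetical example values.

def project_point(point_cam, fx, fy, cx, cy):
    """Project a 3D point (x, y, z) in camera coordinates (metres)
    to pixel coordinates (u, v) using the pinhole camera model."""
    x, y, z = point_cam
    if z <= 0:
        raise ValueError("point is behind the camera")
    return (fx * x / z + cx, fy * y / z + cy)

# A model vertex 2 m in front of the camera, slightly right and above centre.
u, v = project_point((0.2, -0.1, 2.0), fx=800, fy=800, cx=320, cy=240)
print(round(u), round(v))  # 400 200
```

Real AR frameworks (e.g. ARCore) add pose tracking and surface detection on top of this, but the projection step is what anchors a virtual model to a pixel location.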
Information interactivity and applications in the real world (non-AI & AI Augmented Reality)
This essentially uses non-DML and DML applications (see external links for Google Brain, ImageNet, Street View, Informatics) and my own (CognTech/BioNeuroTech) for interacting with the real world. The hope is also to use neuro-robotics and mobile EEG (CognTech work). This crosses over with my other interests in Google StreetView/Maps/Earth, data mining, and DCNN 'AR' apps.
This has provided me with new ideas for testing certain hypotheses by changing variables. In the future, I could help answer questions such as: How accurate is person-label detection at different distances? Does time of day (and hence lighting luminosity) affect the results? What is the precision of object detection for objects moving at speed x? How does varying the viewing angle, a, affect the precision of the reported confidence percentage?
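Questions like these can be probed by logging detector outputs alongside the variable being changed and then aggregating. A minimal sketch of that analysis step, assuming hypothetical hand-logged (distance, confidence) records rather than output from any real app:

```python
# Hypothetical sketch: aggregate logged detector confidences by distance bin
# to probe "does detection confidence drop with distance?".
# The records below are illustrative made-up data, not real measurements.
from collections import defaultdict

def mean_confidence_by_distance(records, bin_width_m=2.0):
    """Group (distance_m, confidence) pairs into distance bins and
    return {bin_start_m: mean confidence} sorted by distance."""
    bins = defaultdict(list)
    for distance_m, confidence in records:
        bin_start = (distance_m // bin_width_m) * bin_width_m
        bins[bin_start].append(confidence)
    return {b: sum(v) / len(v) for b, v in sorted(bins.items())}

# Illustrative records: (distance to object in metres, detector confidence 0-1)
records = [
    (1.2, 0.97), (1.8, 0.95), (3.5, 0.88),
    (3.9, 0.90), (6.1, 0.72), (7.4, 0.65),
]
print(mean_confidence_by_distance(records))
# mean confidence per 2 m bin, e.g. ~0.96 for the 0-2 m bin
```

The same grouping works for any of the variables above (time of day, viewing angle, object speed) by swapping the first field of each record.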
Photo album description: H Muzart testing augmented reality & AI tools by third parties (especially ones derived from Google Brain). For AI (DCNN) object recognition, AI-based information connectionism, and linguistic (optical & auditory) processing --> How good are these for certain variable parameters, compared with human brains' intelligence? For real-world navigation and non-AI 360-degree 3D models --> How naturalistic and efficient are these tools?