Start-up Archetype AI aims to redefine how humans interact with houses, cars, and factories by letting users "speak" directly to objects through artificial intelligence. The idea is to transform enormous streams of sensor data into interactive, text-based conversations, akin to a more physically grounded version of ChatGPT, opening a new mode of engagement between humans and the physical world.

Archetype AI's system is built on a model named "Newton," which processes data from many types of sensors and translates it into responses, charts, code, and other communicative output that reveals what is happening in the physical world in real time. The name "Newton" not only recalls Apple's defunct handheld of the 1990s; it also marks a milestone in the journey of founder and CEO Ivan Poupyrev.

A Soviet-born software engineer, Poupyrev was deeply influenced in his youth by the book "Hackers." That fascination with transforming the world through technology shaped a career that included stints at some of the biggest names in tech, among them Sony, Disney, and Google. It was during his tenure at Google's ATAP division, where he led the Soli project, that the potential of pairing sensor data with machine learning came to the forefront. Soli embedded small radar devices in wearable technology so that devices could respond to human gestures, but the complexity of interpreting the sensor data proved an obstacle. The rise of large language models (LLMs) suggested a solution: Poupyrev and his colleagues realized that, with some modification, these models could convert vast, complex sensor data into far richer, more comprehensive insights about human behavior.
The concept was promising enough that the team left Google to establish their own company, Archetype AI, backed by a $13 million seed round. The team, which includes COO Brandon Barbello along with Nick Gillian, Jaime Lien, and Leonardo Giusti, envisions an AI model that not only communicates in text but also understands complex physical data and translates it into comprehensible language. They imagine a world where AI can interpret sensor data, enabling humans to perceive and solve problems that are currently beyond our grasp.

Demonstrations of Newton's capabilities suggest a range of possibilities: an Amazon package with an embedded motion sensor alerting you in real time to potential damage in transit, or a factory where data from myriad sensors is translated into an easy-to-understand conversation, creating a real-time digital mirror of the entire facility.

Archetype AI's approach has drawn the attention of investment heavyweights such as Amazon and the venture capital firm Venrock, with potential applications in optimizing logistics, healthcare, and the automotive industry. Yet, as with any technology moving into uncharted territory, there are concerns about privacy and surveillance. In response, the team maintains that privacy is central to its use of sensor data: unlike cameras, radar and similar sensors can capture behavior patterns while preserving identities, and the focus is on deploying the technology to solve specific problems rather than building an all-seeing, invasive monitoring system.

With rapid advances in AI and sensing technology, the "talk to a house, chat with a factory" future that Archetype AI imagines could well be within reach.
It would be a world where we can engage in meaningful conversation with the physical environment around us and, potentially, solve problems at scale using artificial intelligence.
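Newton itself is proprietary and not publicly documented, but the package-in-transit example hints at the general pattern: raw sensor streams are condensed into plain-language summaries a person can act on. Below is a minimal, purely illustrative Python sketch of that idea, using simulated accelerometer readings and a simple shock threshold; the function name, threshold, and data format are assumptions, not anything from Archetype AI.

```python
import math

def describe_motion(samples, shock_threshold=3.0):
    """Summarize raw accelerometer samples as plain language.

    `samples` is a list of (x, y, z) acceleration vectors in g.
    This is a hypothetical sketch of turning sensor data into text;
    it does not reflect Archetype AI's actual Newton model.
    """
    magnitudes = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    peak = max(magnitudes)
    shocks = sum(1 for m in magnitudes if m > shock_threshold)
    if shocks == 0:
        return f"No impacts detected; peak acceleration {peak:.1f} g."
    return (f"{shocks} impact(s) above {shock_threshold:.1f} g detected; "
            f"peak acceleration {peak:.1f} g. Package may be damaged.")

# Simulated readings: mostly resting (~1 g of gravity), one sharp jolt.
readings = [(0.0, 0.0, 1.0)] * 5 + [(2.5, 1.5, 3.0)] + [(0.0, 0.0, 1.0)] * 5
print(describe_motion(readings))
```

A production system would of course face far messier data, and the appeal of an LLM-based approach is precisely that it could replace hand-tuned thresholds like this with learned, conversational interpretation.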
Real-Time Reality: Archetype AI Offers a New Era of Sensor-Driven Understanding
Oct 23, 2024