“Whoever wins the personal agent, that’s [going to be] the big thing. Because you will never go to a search site again, you’ll never go to Amazon again.” - Bill Gates

By Dr. Paul Jurcys

In the very near future, each of us will have a personal AI assistant for every aspect of our lives. Each of us will have a personal AI doctor, personal AI coach, personal AI nutritionist, or digital assistant that will help us organize our calendars, schedule events, and purchase items on our behalf. Such AI-powered assistants will augment our abilities, take over time-consuming tasks, and let us focus on the things that matter most.
Buzz In the Market

In the past few months, there has been quite a bit of excitement about the possibility of building truly personal AI:
A massive investment in InflectionAI could be seen as an attempt to stave off any potential competition and secure dominance in this rapidly emerging domain.

Shortage of Experts

Today, we are unfortunately facing a remarkable shortage of experts. With over 8 billion people on Earth, there are a mere 10 million doctors. Teachers struggle to spend enough time in direct interaction with every student in the class. On a personal level, we often lack someone equally knowledgeable or passionate about our favorite topics. In times of grief, suffering, and emotional pain, the desire for a close companion to provide advice and understanding becomes ever more acute.

This is where the transformative power of AI comes in. In the near future, countless individuals, potentially everyone, will be able to augment their personal capabilities with their own personal AI assistants. The trajectory toward universal access to intelligence is clearly mapped out: we can expect that any person with an internet connection and a hand-held device will have access to the same expert advice from a doctor, an educator, or any other specialist, all through their personal AI.

2 Ways to Build Personal AI

As technology continues to leap forward at ever-increasing speed, personal AI assistants are evolving to become more sophisticated and versatile. There are two paths for bringing AI superpowers to humans:

A top-down approach: after the release of ChatGPT and other generative AI tools, we are witnessing an influx of personal AI assistants that are supposed to help us find information, summarize text, or generate images. The top-down approach is rooted in creating software applications - personal AI assistants - that are marketed as tools to help us solve daily tasks and assignments, or to offer us a companion, a buddy that is available 24/7. Think of InflectionAI’s Pi, Midjourney, DALL-E, and countless others.

However, most of our essential tasks are rooted in the physical world: we need to eat, sleep, and go to the bathroom. As cheap sensors are integrated into our wearable devices and the environments around us, individuals gain the tools and the ability to track their personal biological clock - we start living quantified lives. If you are a runner, you probably have a fitness tracker that measures your running distance, heart rate, and recovery rate. Our cars, too, track hundreds of data points (e.g., acceleration, braking, engine performance, and outside factors). Scientists around the world are building assistive robots to help us recover from injuries or learn important skills. These sensor-derived data sets also lead to the development of AI-powered applications that help optimize different aspects of our lives. This sensor-data-AI path resembles a bottom-up approach to building personal AI.

Data Eats the World, But How to Access Data?

AI assistants such as ChatGPT or Pi are amazing at answering our general questions and performing general tasks. However, if you ask anything personal, something related to yourself, they stumble. Let’s see what responses we get to the question “How did I sleep last night?”:

How come!? Isn’t it odd that if you wear an Apple Watch that tracks your activity, sleep, and various biometric data, Siri - an AI assistant built by Apple for Apple’s products - is not able to answer such a simple question?! Eventually, every company building personal AI will face the challenge of connecting the personal AI to the user’s own data.
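To make that gap concrete, here is a minimal sketch in Python of what it takes for an assistant to answer such a question. The names (fetch_sleep_record, build_prompt) and the data values are purely hypothetical; the point is only that the prompt sent to a general-purpose LLM has to be grounded in data pulled from the user's own sources first.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class SleepRecord:
    night: date
    hours_asleep: float
    resting_heart_rate: int

def fetch_sleep_record(night: date) -> SleepRecord:
    # Hypothetical lookup in the user's own data store (e.g., data synced
    # from a wearable). In a user-held data model this lookup would stay
    # inside the individual's private environment; values are hard-coded here.
    return SleepRecord(night=night, hours_asleep=6.4, resting_heart_rate=58)

def build_prompt(question: str, record: SleepRecord) -> str:
    # Ground a general-purpose LLM prompt in the user's personal data.
    return (
        f"User data for {record.night}: slept {record.hours_asleep} hours, "
        f"resting heart rate {record.resting_heart_rate} bpm.\n"
        f"Question: {question}"
    )

# A general assistant sees only the question and can only guess.
# A personal assistant sees the question plus the user's own data:
prompt = build_prompt("How did I sleep last night?",
                      fetch_sleep_record(date(2023, 7, 12)))
print(prompt)  # this grounded prompt can then be passed to any LLM
```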
To build a truly personal AI, we need not only general knowledge from publicly available sources and research repositories; we also need to bring that general, publicly available knowledge together with the specific data of each individual. In other words, to build a truly personal AI, the AI assistant must be able to access the user’s personal data. Otherwise, the AI assistant will be a generalist, not a personal one:

This is where Prifina’s human-centric approach to data comes into play: with Prifina’s user-held data model, it is possible to bring personal AI assistants to each individual and run them “on top of” each user’s own data, privately. In Prifina’s personal data ecosystem, each individual user connects data from their personal data sources to their own data “vault”, where the data is collected and unified. AI-powered apps and assistants run in each individual user’s data environment. Each individual user’s data is private by default.

A human-centric approach to data opens vast opportunities for the use of personal data: not only can personal AI assistants answer the question about last night’s sleep, but ML and LLMs can offer new possibilities for personalization and automation of tasks and generate new value for individuals.

Multi-Agent Interface

So how many personal AI assistants can there be? And how many AI assistants can one person handle? As humans, we have limited time and a limited attention span. Our guess is that each of us will really interact with 5, 7, or 10 digital agents (similar to a manager at a corporation who has 7-10 direct reports). The image below illustrates the universe of digital assistants in the human-centric environment:
These digital agents will also vary in their expertise, capabilities, and proximity to the individual human being. It is likely that people will have their own AI doctors, AI coaches, and AI-powered shopping assistants that will help them accomplish specialized tasks.
Some of them will operate in the “inner orbit” - on top of the user’s own data, privately - while other AI assistants will be generalists and will not have access to the user’s own data. In this multi-agent environment, one possible scenario is that each individual will have one primary and preferred assistant - we can call it “my personal AI.”

But how will those digital agents and assistants interact with one another? What technological infrastructure is needed to make them talk to one another? Assume you feel unwell and suffer from an upset stomach: which personal AI assistant do you go to first? You might go to your “own private AI” and ask, “Why do I feel pain in my stomach?” Your own personal AI may refer you to your personal AI doctor. In this kind of scenario, it will be important to make sure that your initial inquiry to your own personal AI is automatically transmitted to the other agents. This ensures that your AI doctor already knows your concern and can continue the conversation (a minimal sketch of such a handoff appears at the end of this post). Such communication between AI-powered agents is only possible in a human-centric data environment, where all the AI experts run on the user’s side. Sharing sensitive data outside of the user’s own personal data environment is not optimal. In fact, such communication between agents would be impossible in the old, enterprise-centric data ecosystem where data is locked in separate silos.

Paths Forward

As we stand at the entrance gates to the age of AI, the future seems quite exciting: how will our lives change when each of us is able to tap into the potential of our Personal AI? The integration of Personal AI agents into our daily lives is not a question of if, but when and how. To bring this vision into reality, a shift in thinking is required. We need to embrace a human-centric data paradigm where our Private AI operates on our side, on our own data, privately. At Prifina we are building the infrastructure to empower each individual with such “steam engines of the mind”; we want to empower developers to build Personal AI applications that augment each individual’s creative potential. Join us!
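As promised above, here is a minimal, purely illustrative sketch in Python of the agent-to-agent handoff scenario. The names (Inquiry, Agent, hand_off) are hypothetical, not an existing API; the sketch only shows the idea that the user's original inquiry travels, with its context, from "my personal AI" to a specialist agent inside the user's own data environment, so the specialist does not start from zero.

```python
from dataclasses import dataclass, field

@dataclass
class Inquiry:
    # The user's question plus any context gathered so far.
    text: str
    context: dict = field(default_factory=dict)

@dataclass
class Agent:
    # Hypothetical agent running inside the user's private data environment.
    name: str
    speciality: str

    def handle(self, inquiry: Inquiry) -> str:
        # The receiving agent already knows the original concern,
        # because the context travels with the inquiry.
        return (f"{self.name} ({self.speciality}) continues the conversation: "
                f"'{inquiry.text}' with context {inquiry.context}")

def hand_off(inquiry: Inquiry, from_agent: Agent, to_agent: Agent) -> str:
    # Forward the inquiry to a specialist agent; in a human-centric setup
    # nothing here would leave the user's own environment.
    inquiry.context["referred_by"] = from_agent.name
    return to_agent.handle(inquiry)

personal_ai = Agent("My personal AI", "generalist")
ai_doctor = Agent("AI doctor", "medicine")

inquiry = Inquiry("Why do I feel pain in my stomach?", {"recent_sleep_hours": 6.4})
print(hand_off(inquiry, personal_ai, ai_doctor))
```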
On July 13, 2023, an event called “Augmenting Consumer Experiences in the Age of Data & AI” took place at New York Life Insurance Company's offices in San Francisco (kindly hosted by Boudjemaa Nait Kaci).

Augmented UX with AI

The event started with Jouko Ahvenainen from Prifina, who invited his fellow panelists to delve into their personal experiences, both triumphs and trials, with recently released AI and data-driven technologies. Heather Whitney from Morrison Foerster shared her excitement at the launch of ChatGPT. She underscored the versatile potential of generative AI, highlighting its role in revolutionizing customer experiences across a variety of sectors, notably content creation and gaming, as well as gene editing and drug development. Following up, AJ Tomas from Touchdown Ventures shed light on the latest unveiling from Humane Inc., a secretive startup founded by former Apple executives, which recently announced the forthcoming release of a cutting-edge gadget that has generated quite a buzz in the tech world.

Amit Sharma shared his vision: "We believe that generative AI has the potential to create innovative forms of content that go beyond the current video-based experiences on Instagram, YouTube, or TikTok. These novel experiences will offer users the opportunity to engage with brands and services in a multimodal manner."

Aaron Mollin (CEO, Ichijiku) offered a fascinating observation, arguing that most AI-fueled experiences are curated from a “top-down” perspective. He recognized the transformative nature of tools such as ChatGPT, but noted that such tools often feel detached from how we interact with the material objects around us. Aaron shared an alternative viewpoint: integrating AI-powered insights based on data collected from the ubiquitous sensor technology found in everyday physical objects, such as wearable tech, IoT devices, and environmental sensors. He called this the "bottom-up" approach. One such example is a collaboration between his company, Ichijiku, and Prifina: their work has resulted in the creation of sensor-imbued luxury jackets (one of which he was wearing that night).

This immediately led to a conversation with audience members about the fact that we, individual consumers, all have cell phones, and we find it cumbersome to interact with hundreds of separate apps and services. From the consumer perspective, rather than multiplying points of interaction, it would be better to reduce our reliance on ever more devices and apps and make the UX more user-centric.

Predicting the Future

Jouko then asked the panelists to share their thoughts on what changes consumers could expect in the coming years. Amit did not hesitate to suggest that every aspect of the consumer experience will change in the next few years. Some UX aspects will change faster than others, but gradually many industries will transform; e-commerce, online shopping, entertainment, and interactive experiences will be the first. AJ Tomas predicted that another area of fast-moving development is personal productivity. Aaron expressed his wonder about the impact of data on the way individuals make decisions about their own lives; a lot of work is still needed to build personal recommendations and nudges for people. Jouko reiterated his favorite idea about predictions: although we do not have a crystal ball, we can actually make quite certain predictions about the future.
Jouko noted that we can anticipate that certain things will happen (e.g., self-driving cars), but the challenge is to identify the timing: will it happen in a year? Two years? Five years? Amit noted that the current developments in generative AI capture the attention of billions of people who are waiting to see what the actual breakthrough will look like: “And if you go back to the 1980s, people knew there would be something called ‘a computer.’ But nobody knew what a computer would look like … it wasn't clear that it would be a box on your table with something called a keyboard and a mouse… This is true for smartphones and, most recently, Apple VisionPro. We can’t even imagine what Apple VisionPro 14 will look like. And that uncertainty is what makes people excited.”

Legal Issues To Be Resolved

Heather shared insights based on her hands-on experience working with generative AI technology companies; she provided an overview of some of the key legal issues, which revolve around copyright, the legality of the use of data, and data privacy. These issues remain unsettled and are actively discussed among legal experts and AI entrepreneurs:
Heather and Jouko shared their views on the emotional narrative around these generative AI tools: many artists feel hurt and believe their works have been unlawfully used to train AI without their permission. The ethics of this debate raise controversial questions. On one hand, information in the public domain is supposed to be free. On the other hand, some stakeholders invoke strong arguments based on ideas of exclusive ownership rights.

Ichijiku: Combining Fashion, Cultural Expressions and AI

Aaron shared his unique perspective on using traditional kimono silk materials to create bespoke fashion items. Unlike the prevailing approach in the luxury fashion industry, where consumers are made to feel that items purchased just a few months ago are already out of style and are invited to purchase the next generation of items, Ichijiku aims to create life-long pieces that people can hold on to for their entire lives. Ichijiku embeds sensors in these fashion items and uses data to extend the lifetime quality of the garments. With the sensors and the data they generate, Ichijiku jacket owners can monitor exposure to humidity, sunlight, and heat; the first, humidity, being the most serious culprit in the degradation of silk. By using data and AI, Ichijiku aims to create a new type of experience that goes even beyond the maintenance of one-of-a-kind silk. Additional information collected from other sensors, such as heart rate, could help people gain insights into how they feel while wearing Ichijiku’s bespoke jackets. This is something that no one has ever been able to create.

“For me, it's just all about creating pieces that will stand the test of time. … Ichijiku aims to create these unbelievably beautiful, arguably some of the most intrinsically valuable items on the planet. And that, I think, is what's going to really appeal to people. Technology will never be the selling point, although it is definitely appealing. We're using vintage silk fabrics, which ties closely to sustainability ideas. People understand just how beautiful the items they are buying are. The use of data and technology helps us further the story.” - Aaron Mollin

Post-Event Networking
One of the remarkable features of the data and AI-related events currently happening in San Francisco is that people come to meet one another and learn about each other and what others are building. The same spirit could be felt at our event: attendees spent the remainder of the evening at the venue socializing, getting to know each other, and connecting on social media to stay in touch. We at Prifina are excited to be at the heart of this historic moment; we feel inspired and motivated to continue building our human-centric data platform for AI-powered apps and services.
About Prifina

We unlock value from personal data, privately.