Personalization of AI
We are all gonna get our own AI that caters to our own personal desires and needs
A few weeks ago OpenAI made one of their product announcements that, while it didn’t quite fly under the radar, has not gained as much attention as some of their flagship products. They introduced the ability to personalize your ChatGPT queries based on the context of all of your previous interactions. ChatGPT can now remember and reference information from all your previous chats, not just the current session or a limited window. This personalization feature allows ChatGPT to provide responses informed by your history, with claims that this will make interactions more relevant and efficient. You can also personalize the style of the responses, and the name or nickname you want ChatGPT to address you by. This is an opt-in feature that you can enable in the settings. The personalization update marks a significant evolution for ChatGPT, shifting it from a session-based chatbot to a persistent, context-aware AI assistant. I’ve used this new personalization feature to get a sense of how ChatGPT imagines me. And then I made it turn that description into a superhero comic book character.
Meanwhile Mark Zuckerberg has openly pitched Meta’s Llama-powered AI as a deeply personalized assistant, emphasizing its ability to leverage users’ Facebook and Instagram profiles for richer, more contextual interactions. (Yes, that is pretty creepy.) The new Meta AI app, powered by Llama (the latest iterations being Llama 3 and Llama 4), is specifically designed to personalize its responses by tapping into information from users’ Facebook and Instagram accounts. This means the AI can access your social profile data, such as interests, friend networks, and activity history, to provide responses that are more relevant and tailored to you, rather than generic answers. Zuckerberg has described this as a way for AI to "have all the context about your [life]," allowing it to act as a personal assistant that understands your preferences, social connections, and digital persona.
Many of the other prominent AI labs today are also closely affiliated with major technology companies that have accumulated vast amounts of personal data through everyday interactions with their widely used services. For instance, Google has built its AI capabilities using extensive data from billions of web searches, YouTube viewing histories, location data from Maps, and email contents from Gmail, all of which can provide deep insights into user preferences, behaviors, and intentions. Similarly, xAI aims to harness data from X (formerly Twitter), tapping into a wealth of user interactions, trending conversations, and social engagement patterns. Amazon, through its extensive marketplace data and interactions with Alexa devices, leverages purchasing histories, product searches, and even home routines to develop finely tuned AI applications. Collectively, these companies are exceptionally positioned to use the expansive personal datasets they have amassed, offering the potential to develop deeply individualized and context-aware AI systems that personalize user experiences on an unprecedented scale.
I’ll admit it, I am a bit of a hoarder, and have preserved lots of personal documents and artifacts over the years, the kind of stuff that most people have long discarded. I still have some of my old college notebooks in storage somewhere (I think). A big part of my motivation for preserving those has been a vague idea that one day I may want to go back to them and try to reconstruct my life, or understand myself better. Maybe even use them if I ever decide to write memoirs. Now, though, with the rise of personalizable AI, all those artifacts might find a new use: fine-tuning an AI model that is particularly attuned to my own personal history.
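Those artifacts would first have to be collected into a training corpus. As a minimal sketch (the directory layout, file extension, and record shape here are my own assumptions, not a prescribed format), plain-text transcriptions of old notebooks and documents could be gathered into the JSONL format commonly fed to causal-LM fine-tuning pipelines:

```python
import json
from pathlib import Path

def build_dataset(notes_dir: str, out_path: str, min_chars: int = 200) -> int:
    """Collect plain-text transcriptions of personal documents into a
    JSONL file of {"source", "text"} records, a common input shape for
    causal-LM fine-tuning. Returns the number of records written."""
    records = []
    for txt in sorted(Path(notes_dir).glob("*.txt")):
        body = txt.read_text(encoding="utf-8").strip()
        if len(body) < min_chars:  # skip near-empty scans and stubs
            continue
        records.append({"source": txt.name, "text": body})
    with open(out_path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec, ensure_ascii=False) + "\n")
    return len(records)
```

A real pipeline would also need OCR for scanned pages, deduplication, and chunking to the model’s context length, but the shape of the output stays this simple.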
Without getting too melodramatic: I grew up and came of age in an extremely traumatic environment. Most of my youth was also spent under a cloud of extreme uncertainty and deprivation. Many of the “normal” life milestones that most people take for granted had been elusive to me, and then some. These circumstances have all conspired, in ways that are not always clear even to me, to make me who I am, to guide my current choices, and to shape my overall outlook on life. They are, however, unrelatable to most people, including many who are close to me. Fortunately, for the most part it’s not important to know all of those circumstances of my life. Most everyday decisions can be made without any knowledge of that deep personal background. But some can’t.
I’ve often wished to have an advisor who could guide me and help me out with some consequential decisions that I was struggling with. The ideal advisor would know all those details of my life, understand them in ways that make a meaningful conversation possible, be at least as smart as I am, knowledgeable way beyond what I can ever be, share my beliefs and values, and, most importantly, be trustworthy and able to come up with actionable advice that is completely aligned with my own well-being and aspirations.
This is all a lot to ask from any one human being. But AI is different. AI can potentially be designed and trained from the ground up with all those considerations in place. Now, the kind of trust that I would need would not be possible with AI systems that I don’t have considerable control over. This almost completely rules out AIs from the big labs, at least in their current cloud-native offerings. Such a system would ideally run on local hardware, and be completely under the user’s control - architecture, weights, alignment, etc.
Having an AI run locally, where I control the data, is appealing for two big reasons: privacy and personalization. Privacy because none of my data needs to leave my machine. Personalization because the AI’s whole world can be just me, not millions of other users. It can essentially "overfit" to me - in this context, that's actually a feature, not a bug. Imagine an AI that knows only you, that literally can’t talk about anyone else’s life because all it has been trained on is your data. That would be the ultimate personalized assistant (as long as we keep it grounded in reality).
Local models have come a long way over the past couple of years. Unfortunately the best ones are still resource hogs, and need hardware that’s far beyond the reach of most ordinary consumers. Nonetheless, as has been demonstrated with R1 (and presumably the soon-to-be-released R2), it is possible to run SOTA LLMs locally, and interact with them at a reasonably fast pace. The biggest bottleneck remains system memory. As I’ve written elsewhere, for years RAM improvements in home computer systems have been lethargic compared to improvements in processing speed. After all, most people - if they even still use the big bulky desktops of yore - just use them for web browsing, media consumption, and some relatively lightweight media creation, nothing that requires oodles of RAM. We may finally have a major impetus for companies to build consumer machines loaded with far more RAM.
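The RAM constraint can be put into rough numbers. As a back-of-envelope sketch (the 1.2x overhead factor covering KV cache and runtime buffers is my own assumption; actual usage varies with context length and inference runtime):

```python
def model_memory_gb(n_params_b: float, bits_per_weight: float,
                    overhead: float = 1.2) -> float:
    """Rough RAM needed to run a model locally, in GB.
    n_params_b: parameter count in billions.
    bits_per_weight: 16 for fp16, ~4 for common quantized formats.
    overhead: fudge factor for KV cache and runtime buffers (an assumption).
    """
    weight_bytes = n_params_b * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# A 70B-parameter model:
#   fp16:  70e9 params * 2 bytes * 1.2 = ~168 GB - workstation territory
#   4-bit: 70e9 params * 0.5 bytes * 1.2 = ~42 GB - a maxed-out consumer box
```

This is why quantization matters so much for local use: dropping from 16-bit to 4-bit weights cuts the memory footprint by roughly 4x, pulling large models from server hardware down toward high-end consumer machines.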
Eventually, these trustworthy and personalized AI systems may become reliable and dependable enough that we can entrust them with running many aspects of our lives on autopilot: ordering groceries when we run low, paying our bills, engaging in sometimes difficult and protracted negotiations, and many other things that we can’t even imagine right now. They could become enhanced versions of ourselves. They will grow and learn with us, absorbing more and more of our experiences. Eventually, when we are not around any more, they may become the most enduring manifestations of our identity. And that’s not something to be taken lightly.
Love the concept and especially the statement that you want an AI which overfits to you.
Local models are the path to this since, besides full control, you can run them in a privacy-friendly setup. No one, except yourself and your personal AI, should have access to all the data that you created in your lifetime.
I've been working on a special project for ~6 months. You've forecasted a future where everyone has an AI companion, finely tuned to their life context and needs. You are urging developers and product teams to bake in ethical guardrails and privacy-first architectures from the start, so these personal AIs empower rather than exploit. I agree. What ideas do you have to accomplish this well?