This post is by Justine Butler, Head of Operations at SELF.
I have worked for senior business leaders and entrepreneurs across the globe for several decades. The secret to the success and longevity of my work is learning and understanding the personal preferences of my clientele. Who has a food allergy, the need to work out in a gym each day, or a love of Jimmy Choo footwear and fine Christian Dior fragrances?
Over the years I have used various media to record such detail, from 'client profiles' to note files. I'm also fortunate enough to have a great memory and a genuine interest in human interaction, making it super easy for me to remember a person's likes and dislikes.
It’s fascinating to build a catalogue of information on someone's life, and it allows me to pre-empt various requests. If I know when my client's mother is celebrating her birthday, I can suggest gift ideas based on her personal preferences and book the family's favourite restaurant. If I know my Middle Eastern client's favourite Jimmy Choo walking shoes can only be purchased from a London department store, I can suggest a purchasing pitstop during their next visit to London.
Understanding a person's preferences makes me a highly skilled, trustworthy communicator and a significant part of that person's life. They see me as a dependable assistant with a world of lifestyle and business support at their fingertips. I'm sure they're also delighted that my notes are backed up every few minutes to ensure I never lose any records.
As Head of Operations at SELF, I spend a lot of time reviewing and discussing various AI Assistants. I am fascinated by the way technology is evolving right before our eyes, and by how AI tools are a digital version of the way I have always worked, trying to mimic my behaviours. Only last week, I was playing with a new-to-market tool whose website explicitly states that it offers:
“Up to 1 year of memory retained from interaction”
This statement filled me with horror and a question that begs to be answered: How long should a memory last?
If you speak to an app, how important is it to maintain a form of memory? For me, memory is king. I know that working with a client for one year, and understanding their precise likes and dislikes, enables me to be supremely efficient. Now imagine that after one year and one day I wake up, my memory and note files are all erased, and I have no recollection of anything I have learned and captured.
This would make me unable to carry out my job and useless to all clients.
I pride myself on getting the job done by constantly learning and refining my results until I can deliver solutions with little or no interaction with my clients. But if I knew that each year I was counting down to losing my 'memory records', I would feel that my existence as an assistant was ultimately pointless. Imagine all the credibility I had built up with clients instantly swept away when I started asking questions that had been answered months before my memory's expiry date.
I was recently in conversation with a well-respected futurist who stated:
"Currently, AI tech has inbuilt dementia"
I wholeheartedly agree that many AI Digital Assistants appear to have an impaired ability to remember and retrieve information. In humans, the late stages of dementia can include severe delusions or hallucinations, with the patient unable to distinguish between reality and their own perceptions. The sheer extent of this issue is why researchers at Ohio State University are investigating what they call "catastrophic forgetting", and why Taryn Plumb at SDX Central claimed "AI Has a Long-Term Memory Problem". In AI today, this is the elephant in the room, and one of the reasons why we're building SELF so differently.
Last month, I was reviewing an AI tool that had 'got to know me' for a while, so I asked it to recommend a spa in the town where I live. The response gave me the details of a boarding school in the town, yet with a description of the spa's facilities. At best, this is accidental hallucination; at worst, it's a catastrophic programming error that can only improve with human feedback – but at what cost? Should individuals sacrifice their private data to teach a Large Language Model (LLM) that pools information from millions of users and categorically doesn't have any one user's best interests at heart?
That’s too high a price to pay.
This is why I am honoured to be involved in the design of SELF, an ethical, private, hyper-personal AI Assistant that simplifies your life and gives you time for the things that matter. Its actions mimic the way I have always worked and retained information, for all the reasons outlined above.
Currently, in the test-phase environment, our technology is being fine-tuned with the aid of human virtual assistants to ensure every interaction is captured against the user's profile. Preferences will always be remembered, enhanced over time as people's likes and dislikes change, and never deleted unless the user chooses to do so.
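The difference between these two approaches to memory can be sketched in a few lines of Python. This is a hypothetical illustration, not SELF's actual implementation: a permanent preference store that forgets only on an explicit user request, contrasted with a time-limited memory like the "up to 1 year" tool described above.

```python
import time


class PreferenceStore:
    """Sketch of a permanent memory: entries persist until the user deletes them."""

    def __init__(self):
        self._prefs = {}

    def remember(self, key, value):
        # New information refines the existing record.
        self._prefs[key] = value

    def recall(self, key):
        return self._prefs.get(key)

    def forget(self, key):
        # Deletion happens only on an explicit user request.
        self._prefs.pop(key, None)


class ExpiringMemory:
    """Sketch of a TTL-based memory: entries silently vanish after max_age seconds."""

    def __init__(self, max_age):
        self.max_age = max_age
        self._entries = {}

    def remember(self, key, value):
        self._entries[key] = (value, time.time())

    def recall(self, key):
        entry = self._entries.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if time.time() - stored_at > self.max_age:
            del self._entries[key]  # the record is gone for good
            return None
        return value
```

With the expiring version, a preference learned on day one is unrecoverable on day 366; with the permanent store, only the user's own `forget` call removes it.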
SELF is designed for life and is the one piece of ground-breaking technology that will never suffer from tech dementia.
Join as a tester today and become part of the Ethical AI revolution: https://buff.ly/45F98PK