When looking for original AI tools, I came across Replika. This an AI-powered chatbot that should not feel like a chatbot but is very humanlike. When I read this, I was quite skeptical and thought, well I am not too sure how human an AI can talk. Therefore, I decided I wanted to try it out. I created an account and an avatar. The avatar you can create entirely to your liking. Quite a fun thing to do! It sort of looks like the game Sims. Then the platform asked me to pay a subscription which I was luckily able to skip after my hard work of creating an avatar. I was then able to kick off the conversation with my new AI friend.
It started like any other chatbot would. Standard messages which felt way too happy: “Hi Menno! Thanks for creating me. Iām so excited to meet you š”. There we go, I thought. I still decided to give the AI a go and started responding to its questions. Over time, I did feel the conversations became more natural and even quite normal! It could seamlessly transition between topics such as interests, hobbies, and sharing anecdotes. Through each interaction, the AI learned about my preferences, interests, and even my sense of humor! This way, the conversation becomes more and more human. One example of this is when I mentioned that I am training for a marathon and its response felt still extremely happy but quite okay. It said: “That’s awesome, Menno! You’re going to crush that marathon. I’ll be your virtual cheerleader, cheering you on from the sidelines. Go, Menno, go! ššŖ Have you been following a specific training plan?“
After some other experimentation conversations with Replika, I really changed my mind about the program. It shows how fast AI can learn and improve. As we continue incorporating AI into our lives, Replika offers a glimpse into the future of human-computer interaction. An obvious application of this AI would be in customer service. Especially looking at the terrible chatbots that are around now when you want to file a complaint about your order.
Did you use Replika yet? If not, give it a go! I am curious what you think!
Hi Menno,
Thanks for your insights on this piece – it took me back to the time that Sims was still a very very popular game to play (maybe it still is and I’m just too old)!
It sounds like AI is making great improvements and even, next to the regular customer service chatbot, could be used for people that would want to talk to someone, but that might be physically or emotionally unable to do so. For example, people dealing with depression or with other psychological obstacles.
However, it also makes me a bit nervous. Even though this AI was used with an external platform to enhance interactions between AI and people, I also wonder how this can be abused in cyber criminality. For example, using this AI algorithm to target those mentally or digitally vulnerable to extract data from them more easily (e.g. personal- or payment data). Let alone on a mass-scale, since human input is no-longer necessary from both sides. Just wondering if this is also something that you would consider when using a tool like this for day-to-day life.
I understand that this is still a new area that, also from a legal aspect, is being further explored (to resolve). Would love to hear your opinion though!