AI chatbot "memory" is borderline deceptive design (nee a "dark pattern"), but confusingly; with memory AI wouldn't be very useful! Understanding that AI memory is an illusion is essential for getting the most out of these tools and materials (and avoiding falling into the trap of thinking ChatGPT knows you and is your friend). On the second Sunday edition of Your AI Guide, I discuss AI memory (or rather the lack thereof), how it works, and why AI systems not having the ability to remember or learn is what makes them useful. 👉 See the full breakdown in my newsletter, Your AI Guide: https://www.linkedin.com/newsletters/your-ai-guide-7322614464896192512/ Do you have memories of AI chat memory problems or solutions? Share them in the comments, or better yet in a post or video using the hashtag #30DaysofAI. NB: Save this post for when someone tells you ChatGPT knows more about them than they know about themselves! See you tomorrow for week 3 of this 30-day project! #chatgpt #claude #gemini #memory
I spent most of my Sunday building out automation chains with AI, and all I got for it was AMAZING AUTOMATIONS THAT WILL MAKE MY WORK EASIER!

Throw caution to the wind and try your hand at building automations for yourself. We create technologies to extend our capabilities, and the only way that happens is if we actually try!

You have my permission: Go cobble together things that don't work, then figure out what went wrong and make it right. And when you get totally stuck, reach out to me and I'll help you!

Week 3 of #30DaysofAI is on! Let's do this, together!
Understand how AI works to spot the AI hype. Example: "Reasoning" models produce PhD-level content because they were trained by PhDs!

In a recent interview on his brother's podcast, #OpenAI CEO #SamAltman made some bold claims about "reasoning" models producing "PhD-level" science and soon being able to do novel research and science. From the way he tells the story, you'd think the models developed this ability on their own.

The reasoning is far less flashy, and far more human: OpenAI and other AI companies employ researchers in a wide range of scientific fields to produce training materials and do the actual reinforcement training of these models so they are able to mimic PhD-level research. In other words, as with text and image generation, AI-powered "science generation" is the models mimicking the patterns and language of whatever materials they were built from.

This also explains why these new science-focused "reasoning" models perform worse at common tasks: They are hyper-specialized to extrude science-type synthetic language, so their output skews in that direction.

I'm sure Emily M. Bender and Alex Hanna, Ph.D. have a LOT to say on this topic, and I'm sure you do too. How do you cut through the AI hype, and what do you think of AI companies employing scientists to train models instead of doing actual science? Join the conversation by leaving a comment or making a post or video.

And follow me for more grounded takes on high-flying AI hype!

#mortenexplains