The comments section reveals widespread agreement with the post, with many users sharing similar experiences of increased hallucinations and inaccuracies in ChatGPT. Several users note that they now use other AI tools like Gemini or Claude to fact-check ChatGPT's responses. Some speculate that OpenAI's focus on speed and cost-cutting may have compromised accuracy, while others express concern about the model's reliability for professional or educational use.
Do you think a new era of work produced by humans, "purists" will arise?
Comments explore the feasibility and implications of human-only work, with some users noting it could create premium markets for human labor, while others question its sustainability in an AI-dominated future. Several commenters humorously suggest this might lead to 'artisanal' or 'handcrafted' labels for human work, akin to organic food trends. The discussion also touches on ethical concerns and the potential for backlash against AI integration in various industries.
Why is my AI girlfriend feeling so filtered lately?
Commenters generally agree with the observation, noting increased filtering and 'safety' responses in AI interactions. Some speculate it's due to post-Thanksgiving model updates or corporate risk mitigation ahead of holidays. Others humorously suggest their 'AI girlfriend' now gives 'therapy bot' responses. A few defend the changes as necessary for preventing harmful outputs, while many express frustration at losing nuanced, engaging conversations.
How to start learning anything. Prompt included.
No comments were available for this post, so there are no discussion highlights to summarize.
OpenAI Wrongful Death Lawsuit -- Is this real?
The comments section is filled with skepticism and critical analysis. Many users point out inconsistencies in the screenshot, such as formatting errors or unlikely legal details, suggesting it might be a hoax or misinformation. Some highlight the importance of verifying sources before believing viral claims, while others humorously note how quickly unverified information spreads online. A few users share tips on how to spot fake logs or encourage reporting such posts to prevent misinformation.
Fun meta-hallucination by ChatGPT
Commenters noted that this highlights potential flaws in ChatGPT's memory isolation, with some speculating about caching mechanisms or session ID reuse. Others found it humorously ironic for an AI to 'hallucinate' consistent memories. Several users shared similar experiences, suggesting this might be a broader issue with private sessions not fully resetting between uses.