We stumbled upon a LinkedIn post last week about a new use case for ChatGPT – a check-splitting feature. It was promoted as being “handy if you’re going out this weekend”. In other words, a productivity hack.
AI-powered tools and features like this are rolling out nearly every day. On closer inspection, though, this particular feature reveals itself to be more of a gimmick than a genuine productivity enhancer. And, even more concerning, a data harvesting trap.
Welcome to another issue of Secrets of Privacy where we discuss personal privacy related topics and provide practical tips to immediately shield your personal privacy.
If you’re reading this but haven’t yet signed up, join the growing Secrets of Privacy community for free and get our newsletter delivered to your inbox by subscribing here 👇
If you scroll through the responses to that post, most replies approve of the use case. No surprise. These days, there’s more to be gained on social media from being an AI booster than an AI skeptic.
But a few replies noted how inefficient this use case is. Writing the prompt takes longer than simply asking the server to split the check 50/50. Itemizing in your head isn’t much harder or slower when it’s just two people, as was the case here. AI provides no real efficiency gain here. In fact, this use case harms the overall AI value proposition because it’s so gimmicky.
If this were only a gimmick, it would be no big deal. The problem is that this feature is (likely) an intentional data harvesting trap. Because the data harvesting opportunities are enormous, promoting gimmicky AI use cases as fun, neat novelties will be a growth industry. All the while, the gimmicks will harvest your data to train models and possibly even to sell to third parties. These will be the new weather apps and addictive mobile games that emerged in the early days of the smartphone boom.