The Current State of Product Recommendations and Research
I have been toying with the idea of picking up a standalone camera for a while. I would like a camera that anyone in the family could use, without the need to ask for Ashley’s or my phone. However, I don’t want it to be a complicated device. It needs to be simple enough for the kids to pick up and use without a lot of fuss. I want something more on the “point and shoot” end of the spectrum. I have seen some people promote film cameras as a way to avoid the overproduction of photos and to be a bit more thoughtful about the photos we choose to take. Since photos have a cost, the theory goes, is this moment worth capturing? Are you taking the time to compose the shot the way you want? I’ll be honest, I haven’t used a film camera in ages. But I’m definitely at the right age and demographic to get caught up in analog nostalgia.
Without much experience in this area and not really knowing what worthwhile options are available, I used my search engine of choice, DuckDuckGo, to search for “best starter film camera”. One of the first results (that wasn’t an ad or the AI-generated response) was a site I didn’t recognize. That isn’t uncommon, though; there are so many websites out there, and many are content farms abusing SEO to harvest ad revenue. I’m fairly used to seeing dubious options in my results. I tapped through to look at the article, “Best Film Cameras for Beginners 2025”. I was alarmed to see emoji section headers. Before I even started reading, I knew this was an LLM-generated article, likely infused with affiliate links. Bleh. I knew this was possible, even likely, but I still wasn’t prepared for how it would feel to run into it this morning.
A few months back, Ashley and I were discussing web search, LLM responses, and trust over lunch. She initially dismissed the idea of knowingly reading or using LLM responses for research. I postulated that, soon, there would be enough LLM-generated content that we could no longer assume our search engine results led to pages written by a human. I then asked her why she trusts websites in search engine results. What makes the author any more of an authoritative source than an LLM? How can we verify that people are who they say they are? How can we validate their credentials? The question of what we trust, and why, was heavy. The idea that we can no longer trust what we read gave us both a bit of an existential crisis. I thought we’d have a few more years before it got this bad. I’m beginning to think I was too optimistic.