A day late from Le Mans

Happy Sunday Voornaam,

Don't worry - this isn't a cheeky rebrand of Weekender. It's just very useful that today is a public holiday in South Africa. I planned to send this yesterday, but attending Le Mans from Thursday to Sunday was such an overwhelmingly intense (and exhausting) experience that there was simply no way it was happening.

My passion for the event, and especially for finally ticking it off my bucket list, was contagious, it seems: Dominique Olivier used the opportunity to research the race and learn about how the technology on track lands in the road cars that we use. For those among you who appreciate technological advancement and hopefully a fast car or two, her article and some of my images from the race can be enjoyed here.

Now, you may not be a motorsport fan, but you're at the very least a fan of statistics and interesting stories. Here's one that has nothing whatsoever to do with cars: giraffes are 30 times more likely to get hit by lightning than people are. 

Reading a statistic like that is almost certain to make you picture what a giraffe getting struck by lightning looks like, or to hypothesise about why that may be the case. Is it because they’re so much taller than we are, and therefore closer to the sky? Or because they spend the majority of their time outside?

We don't know for sure, and the small sample size doesn't do the statistic any favours: there were only five well-documented fatal lightning strikes on giraffes between 1996 and 2010. Either way, if you're ever stuck in a lightning storm in a game reserve, you should probably stay close to a giraffe as a decoy. You should also be nervous of any statistic that is based on only a handful of observations.

Read on for the power of a technology review and three examples of how artificial intelligence isn't always very intelligent.

The Finance Ghost (follow on X) | Dominique Olivier (connect on LinkedIn)

To review or not to review

TL;DR: One bad review in the tech world has the potential to sink an $850 million company - and make us all question the point of reading reviews in the first place.

The power of a bad review can have far-reaching consequences - a fact vividly illustrated by YouTuber Marques Brownlee’s recent critique of Humane’s AI Pin. Brownlee, known for his thorough and honest tech reviews, did not hold back in calling the AI Pin “the worst product” he has ever tried. This blunt assessment has ignited a significant debate about the fairness and impact of negative reviews.

Tech founder Daniel Vassallo was among the vocal critics of Brownlee’s harsh review, suggesting that it was "almost unethical" given Brownlee’s substantial influence. Vassallo pointed out to Business Insider that “the power to crush a company shouldn’t be taken lightly.” His argument hinges on the idea that a reviewer with a massive following, like Brownlee, holds significant sway over public perception and market success.

Humane, the company behind the AI Pin, currently finds itself in a precarious position. Before releasing any products, the company was valued at a staggering $850 million. This high valuation placed considerable expectations on the AI Pin, positioning it as a potential game-changer in the tech world. However, Brownlee’s negative review, coupled with other users' complaints about the device's tendency to overheat, lag and lack certain essential features, painted a different picture.

The $699 AI Pin faces a barrage of criticism, highlighting how even a product with significant backing and hype can falter. Even the best marketing and positive reviews cannot salvage a product if it lacks a compelling use case or is priced too high, as demonstrated by other high-profile products like Apple’s $3,500 Vision Pro. In Brownlee’s case, endorsing a product he genuinely found to be lacking would only diminish his credibility. For a reviewer who has built a reputation on honesty and thoroughness, maintaining that trust with the audience is paramount.

This situation raises an essential question about the role and responsibility of reviewers. Ideally, reviewers should thoroughly test products and provide honest feedback, serving as guides for consumers to make informed decisions. However, this ideal often clashes with the reality of the digital age.

Retailers like Amazon and Yelp constantly battle against fake reviews that people are paid to write. These fraudulent reviews can mislead consumers, making it difficult to distinguish genuine feedback from paid endorsements. On top of that, many influencers and bloggers receive free products, trips, and other perks, which can lead to bias in their reviews. Unlike anonymous reviewers who might have less incentive to skew their opinions, these influencers can feel pressured to provide positive feedback to maintain relationships with brands.

So, are we still reading reviews in 2024? We probably should - but with a healthy dose of skepticism.


Le Mans: The race to innovate

With Ferrari no doubt still celebrating yet another win at Le Mans, the time is right to explain how racing drives the technological innovations that end up in road cars. Dominique Olivier dug into the history books to learn more. You can read it here>>>

Dominique's fast facts: Artificial Intelligence is pretty dumb

An assortment of facts that will take you only a minute to read.

  1. Back in 2017, a 6-year-old girl named Brooke Neitzel managed to order a $170 KidKraft dollhouse and four pounds of cookies through Amazon Alexa, simply by asking Alexa for the products. When San Diego news channel CW6 reported it during a daily morning show, news anchor Jim Patton said, “I love the little girl saying, ‘Alexa, order me a dollhouse’”... which of course prompted Alexa devices in multiple viewers’ homes to order dollhouses as well.

  2. After attempting to renew his passport, Richard Lee, a 22-year-old Asian man, was turned down by the New Zealand Department of Internal Affairs after its software claimed that his eyes were closed in his picture. The facial recognition software rejected Lee’s photo multiple times, forcing him to contact the department in order to speak to a human and get his new passport validated.

  3. Facebook, which uses AI to translate posts from different languages, made a serious blunder in 2020 when it translated the name of Xi Jinping, the president of China, as “Mr Sh*thole” in English. The error occurred when Facebook translated a post from Burmese, a language that it had limited data on, and used a placeholder name for Xi Jinping that was similar to a vulgar word in English. Awkward.

Disclaimer

Our content is intended to be used and must be used for informational purposes only. You must do your own analysis before executing any investments or strategic decisions, based on your own circumstances. We do not provide personalised recommendations or views as to whether an investment approach or corporate strategy is suited to the needs of a specific individual or entity. You should take independent financial advice from a suitably qualified individual who gives due regard to your personal circumstances.

Whilst every care is taken, we accept no responsibility or liability for any errors or omissions in any of our content.

The views, thoughts and opinions expressed in our content belong solely to the author or quoted individuals and/or entities, and not necessarily to the author's employer, organisation, committee or other group or individual, or any of our affiliates or brand partners.