Jason Bourne (if you don’t know who this is, go watch The Bourne Identity right now!) hops into an Uber using his anonymous account in Croatia. He uses this particular account in Croatia, even though it is not encrypted, because Croatia’s cyber monitoring sucks. He won’t be caught. He has a lukewarm conversation with the wiry, grey-haired driver of a second-hand Subaru Impreza. Within minutes he gets out of the cab prematurely, because he has a suspicion about this driver. So he gets off and plans to book a new cab. The Uber app prompts him to first rate the driver. ‘Aaargh,’ he says. ‘How do I decide the driver’s performance?’ Bourne is feeling really insecure about Interpol chasing him, and is in a hurry, so he gives the driver a measly ‘3’ rating.
Uber ratings, like many consumer metrics taken directly from a consumer, are basically a reflection of how that customer is feeling at that particular moment. Sure, a great cab ride may make Jason Bourne happy. But he’s got other troubles on his mind: the FBI and Interpol are chasing him. He isn’t feeling particularly jolly. He’s bound not to give a 5-star rating. I personally try to give a 5-star rating if everything goes well, which involves safe driving, decent enough courtesy, the driver wearing a seatbelt, and not blaring music into the backseat. Most cabs actually do satisfy these criteria, and I give them a 5. Sometimes, however, I have noticed that when I’m just pissed off with traffic or in a bad mood, I have this subconscious inkling to give a lower rating, even if the driver performs well. I am aware that this happens frequently, and I am convinced that our current mental state affects what should ideally be unbiased rating systems.
Information on the consumer mindset is an extremely valuable resource, not just for businesses but even for indexing the wellness of a particular demographic. This includes all data that pertains to consumer satisfaction: reviews, opinions, as well as suggestions.
A lot of this data is now indexed and classified through machine learning algorithms, to make sense of the enormous amount of it. Chiefly, these algorithms consider the customers to be unbiased. Even if this assumption is wrong, the algorithms rely on the biases being essentially stochastic: errors drawn from a roughly Gaussian distribution centred on zero, which cancel out when averaged over enough data points. If that holds, the analysis tends to be accurate, and Google can fold this data into its analytics.
At the same time, a major chunk of customer data is received by business analysts, who themselves apply the, umm, ‘analytical techniques’ to understand it (some data needs analysts, because it is more sensitive or needs a more human approach to be dealt with, say, maybe Uber ratings). These datasets are often not large enough for the random errors to cancel out. This can lead to inaccuracies in making a business decision.
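To make that large-sample vs small-sample argument concrete, here’s a toy simulation. This is purely my own sketch (nothing Uber actually does): each rider’s submitted rating is the driver’s ‘true’ score plus a zero-mean Gaussian ‘mood’ error, and I deliberately skip clamping to the 1–5 scale to keep the averaging behaviour clean.

```python
import random

random.seed(42)  # make the toy example reproducible

TRUE_SCORE = 5.0  # the driver's hypothetical 'true' quality

def rated(mood_sd=1.0):
    """One rider's rating: true score plus zero-mean mood noise."""
    return TRUE_SCORE + random.gauss(0, mood_sd)

def average_rating(n_rides):
    return sum(rated() for _ in range(n_rides)) / n_rides

# With huge ride counts, the mood noise largely averages away...
big = average_rating(100_000)
# ...but an analyst eyeballing a handful of reviews mostly sees noise.
small = average_rating(5)

print(f"100,000 rides: {big:.2f}")   # very close to 5.00
print(f"      5 rides: {small:.2f}")  # can land noticeably off 5
```

The standard error of the mean shrinks as 1/sqrt(n), which is exactly why an aggregate over millions of rides can tolerate moody riders while a small hand-reviewed batch cannot.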
What I’m getting at is this: we should create a scenario where Uber cabs (or other services) are not rated by their customers, since customers are unreliable and often stupid. Services should instead be rated by well-designed artificial intelligence algorithms, which replicate the sensing of the main features a human consumer would rate a cab by. For instance, one could use the driving performance metrics that self-driving cars employ to assess how safely the driver drove. Or analyze tone of voice, or word choice, to assess courteousness. Or take a picture of the interior of the car to see if it is messy or clean. You get what I’m saying: just automate shit brilliantly!
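As a back-of-the-envelope sketch of what that automated rating could look like: take machine-sensed feature scores (each normalized to 0–1) and blend them into a star rating. The feature names and weights below are entirely invented for illustration; real telemetry, audio, and vision pipelines would be needed to produce the scores.

```python
# Hypothetical feature weights -- invented for this sketch.
WEIGHTS = {
    "safe_driving": 0.5,   # e.g. from self-driving-car style telemetry
    "courteousness": 0.3,  # e.g. from tone-of-voice / word analysis
    "cleanliness": 0.2,    # e.g. from a photo of the car's interior
}

def auto_rating(scores: dict) -> float:
    """Weighted average of 0-1 feature scores, mapped onto 1-5 stars."""
    total = sum(WEIGHTS[f] * scores[f] for f in WEIGHTS)
    return round(1 + 4 * total, 1)

# A sample ride: safe-ish driving, polite driver, spotless interior.
ride = {"safe_driving": 0.9, "courteousness": 0.8, "cleanliness": 1.0}
print(auto_rating(ride))  # 0.5*0.9 + 0.3*0.8 + 0.2*1.0 = 0.89 -> 4.6 stars
```

A weighted average is the simplest possible blend; a real system would presumably learn the weights, but the point is that none of these inputs depend on whether the passenger happens to be fleeing Interpol.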
But now that I think about it further, considering that robotic memory is everlasting and permanent: do you really want a record of everything that happens in a cab ride? Is this an invasion of privacy? Are the inaccuracies in customer review data even a big enough problem to need a comprehensive AI solution? Does it dehumanize the whole cab ride experience? Perhaps customer reviews are meant to be understood by humans themselves, and we have an innate knack for picking out the unbiased reviews from the biased ones. Will automating customer-mindset output channels make us even more lazy and stupid, or will it give us time to do other valuable stuff, like run from Interpol (see Jason Bourne above)?
So this has ultimately emerged as the AI vs human intelligence question yet again. In this scenario, which channel of service performance reviews would you prefer to see proliferate in the future? Let me know in the comments.
The Panda of Steel
Note to readers: Usually, when I write a blog post I structure the content to form a whole. However, in this post I tried to have it just flow from the start, and let ideas be generated with the flow. I feel the concept discussed is interesting, but ending on a question may have lacked a ‘prestige-like’ finish for the post.
What do you guys think – should I do more of these ‘flow posts’ with abstract ideas?