Personalization: The Machine Mirrors the Ugly Us That We Don't Want to See, So We Blame the Machine

There is a fair amount of discussion that Google search, Amazon recommendations, and Facebook feeds are too personalized and are making us closed-minded, the so-called filter bubble or echo chamber effect. I have been thinking about this for a while, and I have a new idea about what makes personalization and recommendation go wrong.

It is not the machine, and it is not the algorithms. It is human nature. The machine ultimately learns from human beings, and it merely augments human reality. My argument is that for a person with a relatively open and balanced mind, the personalized results will be fairly balanced, and the recommendations will serve knowledge discovery well. Only for people who already hold heavily biased opinions and worldviews will the personalized results be biased. I don't think the machine is making things worse; the machine is just reflecting reality, and it is actually good that the machine makes us see that reality so we can figure out a way to address it. What produces these biased and closed opinions is essentially human nature, and it is not the machine's responsibility to solve that problem.
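The point can be illustrated with a toy sketch (entirely hypothetical data and scoring, not any real system's algorithm): a recommender that ranks items by the user's own click counts simply reproduces whatever balance, or imbalance, the user brings to it.

```python
# Toy sketch: a recommender that scores topics by a user's own click
# history mirrors that user's balance or bias back at them.
from collections import Counter

# A perfectly balanced candidate pool of articles, labeled by topic.
ARTICLES = ["left", "right", "center"] * 10

def recommend(click_history, pool, k=6):
    """Score each topic by how often the user clicked it, then return
    the k pool items with the highest scores (ties keep pool order)."""
    prefs = Counter(click_history)
    ranked = sorted(pool, key=lambda topic: -prefs[topic])
    return ranked[:k]

# A balanced reader gets a balanced feed back...
balanced_user = ["left", "right", "center", "left", "right", "center"]
print(Counter(recommend(balanced_user, ARTICLES)))

# ...while a one-sided reader gets only their own side back.
biased_user = ["left"] * 6
print(Counter(recommend(biased_user, ARTICLES)))
```

The pool itself is balanced in both runs; only the click history differs, which is the essay's claim in miniature.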

Closed-minded people have always existed, whether or not Google search exists. Even without Google search and the like, these people would not seek out or listen to opinions outside their chamber whatsoever. We dream that the machine could solve this problem by surfacing opposing and diverse opinions (efforts like Findory news), but it is not that easy: it is hard to move people out of their comfort zone, which is part of why Findory failed.

To solve this problem, we have to open our own minds first, or at least some of us do. Then we can figure out a way to open other people's minds more effectively, and make the machine do it.

To sum up, what I am arguing is that, at this moment, machine personalization and recommendation is not doing bad things (not especially good things either, just stating the fact); it is just reflecting human reality. What we have to realize is that the problem is not the machine, it is ourselves; the machine is just letting us see flaws that we sometimes do not want to face. In the future, we need to figure out ways to let the machine do good on this front.

Published by Xin_Cindy_Chen

I am a PhD student in engineering education at Purdue University. I have a BS in Electrical Engineering. I am a data scientist dedicated to social good. To do that well, I need to understand how business and capital work, because doing good at large scale is possible only when the economy is healthy.

3 Comments

  1. I’m not sure I buy this argument. No matter how open-minded one is, she may simply not know to look for certain topics. She will start from what she knows, maybe explore slowly, step by step, link by link. But the chances of discovering something radically new and unheard of, not connected to the initial topic that prompted the search, are slim. With non-personalized news, the odds of coming across something completely new and unexpected are a lot higher.

    1. I agree that even for open-minded people, it is very difficult to actively look for topics beyond their scope. If this is a problem, the machine is just reflecting it by following what people want, and people are getting spoiled. I still think personalization is very important in helping people find things they are interested in, and simply removing personalization may not make people more open to topics beyond their scope. So we need to do personalization in a better way, one that would not spoil people but would “educate” or “guide” them to important topics that they didn’t know about but should.
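The “guide, don’t spoil” idea in this reply can be sketched as a greedy re-ranker (a hypothetical toy, loosely in the spirit of maximal marginal relevance, not any deployed system): each pick trades the user’s learned preference against a penalty for topics already shown, so even a one-sided profile still surfaces a few items from outside its usual scope.

```python
# Toy sketch: greedily pick items, trading topical relevance (click
# counts) against a penalty for topics already shown in this session.
from collections import Counter

def guided_rank(candidates, prefs, k=6, diversity=2.5):
    """candidates: list of (item, topic) pairs; prefs: Counter of the
    user's past clicks per topic. Returns k picks; the diversity weight
    controls how quickly a repeated topic loses ground."""
    chosen, shown = [], Counter()
    pool = list(candidates)
    for _ in range(min(k, len(pool))):
        best = max(pool, key=lambda it: prefs[it[1]] - diversity * shown[it[1]])
        chosen.append(best)
        shown[best[1]] += 1
        pool.remove(best)
    return chosen

# A heavily one-sided reader still sees their favorite topic first,
# but the repetition penalty lets other topics break through.
prefs = Counter({"left": 6})
cands = [("a", "left"), ("b", "left"), ("c", "left"),
         ("d", "right"), ("e", "center"), ("f", "right")]
print([topic for _, topic in guided_rank(cands, prefs)])
```

With `diversity=0` this collapses back to pure preference ranking; the single weight is the dial between “spoiling” and “guiding.”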
