In this study, we examine how personalization algorithms create "filter bubbles," which can isolate individuals intellectually by reinforcing their pre-existing biases, focusing on personalized Google searches. By setting up accounts with distinct ideological leanings (progressive and conservative) and employing deep neural networks to simulate user interactions, we quantitatively confirmed the existence of filter bubbles. We further deployed an LSTM model that assesses the political orientation of text, enabling us to bias accounts deliberately and monitor their growing ideological inclination. Over time, searches performed through the biased accounts returned increasingly politically slanted results, and the accounts' own political bias continued to grow. These results provide numerical evidence for the existence of filter bubbles and demonstrate that their influence on search results strengthens over time. We also explore potential mitigations, proposing methods to promote a more diverse and inclusive information ecosystem. Our findings underscore the role of filter bubbles in shaping users' access to information and highlight the urgency of addressing this issue to prevent further political polarization and the entrenchment of media habits. Through this research, we contribute to a broader understanding of the challenges posed by personalized digital environments and offer insights into strategies that can help alleviate the risks of intellectual isolation caused by filter bubbles.
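To make the LSTM-based orientation scoring concrete, the following is a minimal sketch of the kind of text classifier the abstract describes, not the paper's actual model. The architecture, hyperparameters, toy corpus, and labels here are all illustrative assumptions; the study's real training data and network configuration are not given in this section.

```python
# Illustrative sketch only: a small LSTM classifier that scores the
# political orientation of a text snippet. All names, hyperparameters,
# and the two-example toy dataset below are hypothetical.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Toy corpus with hypothetical labels: 0 = progressive-leaning, 1 = conservative-leaning.
texts = [
    "expand public healthcare and climate regulation",
    "cut taxes and shrink the federal government",
]
labels = np.array([0, 1])

# Tokenize the raw strings and pad them to a fixed sequence length.
vectorizer = layers.TextVectorization(max_tokens=10_000, output_sequence_length=64)
vectorizer.adapt(texts)
x = vectorizer(np.array(texts))

# Embedding -> LSTM -> sigmoid score in [0, 1]; scores near 0 or 1 would be
# read as a stronger ideological leaning, scores near 0.5 as neutral.
model = tf.keras.Sequential([
    layers.Embedding(input_dim=10_000, output_dim=64),
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, labels, epochs=5, verbose=0)

# Score a new search-result snippet for political orientation.
print(model.predict(vectorizer(np.array(["lower corporate tax rates"]))))
```

Under this setup, the same scorer could be applied both to search-result snippets (to measure result bias over time) and to the content an account consumes (to drive the deliberate biasing the abstract mentions), though how the study actually wired these steps together is not specified here.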