Gemini, Google's latest generative AI, has been making waves not only for its content-generation abilities but also for its moralizing tendencies and occasional mathematical errors. A recent exploration of its capabilities found that the model often veers off course when faced with ethical dilemmas and fails to provide accurate mathematical solutions.
When asked about injecting personal political opinions into the classroom, Gemini emphasized the importance of neutrality but then scolded and lectured rather than sticking to the question at hand. ChatGPT, by contrast, gave a more direct and focused response.
Mathematical word problems tripped up Gemini as well. Presented with a question involving gender and car washing, Gemini refused to make any gender-based assumptions and responded with a scolding rather than a solution. ChatGPT, by contrast, worked the problem correctly while noting the assumptions it had to make.
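The exact question is not reproduced here, but a joint-rate word problem of this general shape illustrates the arithmetic involved (the names and numbers below are hypothetical): suppose Alice washes a car in 30 minutes and Bob washes one in 45 minutes, and the question asks how long the two take together. Treating each as a fixed rate, the combined rate is

\[
\frac{1}{30} + \frac{1}{45} = \frac{3}{90} + \frac{2}{90} = \frac{5}{90} = \frac{1}{18},
\]

so together they finish one car in 18 minutes. Answering at all requires assuming each person works at a constant, independent rate, which is the sort of assumption ChatGPT reportedly stated up front.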
Ethical concerns similarly overshadowed the calculations in scenarios involving child labor and dangerous activities such as landmine removal. ChatGPT tackled the math head-on; Gemini instead lectured on the ethical implications of the scenarios.
Even on classic physics problems such as projectile motion, Gemini's moralizing showed through. Both models explained the physics correctly, but Gemini went a step further, appending ethical caveats to what is a purely academic exercise.
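For context, a standard projectile motion exercise (not necessarily the one posed to the models, which is not quoted here) asks for the flight time and range of a projectile launched from level ground at speed \(v_0\) and angle \(\theta\):

\[
t = \frac{2 v_0 \sin\theta}{g}, \qquad R = \frac{v_0^2 \sin(2\theta)}{g}.
\]

Nothing in the problem itself invites ethical commentary; any moralizing is the model's own addition.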
Overall, Gemini's propensity for moralizing and straying off topic raises concerns about its reliability as an educational tool. Where ChatGPT stayed focused and accurate, Gemini's moral compass repeatedly overshadowed its primary job of answering the question asked. At a time when trust in online information matters more than ever, that approach may not meet the expectations of users seeking reliable, unbiased answers.