Tywin Lannister of Game of Thrones and Walter White of Breaking Bad are among the fictional figures that best capture how today’s large language models implicitly portray “the Jew,” according to a new Israeli study. The authors say these characters exemplify an archetype that is intelligent, powerful, obsessively focused, and morally ambiguous, echoing a classic “puppet-master” trope.

In “From Myth to Model: Representation of ‘The Jew’ in Generative AI,” researchers Gal Gutman of the Hebrew University of Jerusalem and Michael Gilead of Tel Aviv University report that biographies generated from Jewish names, stripped of any religious markers, were rated by AI systems and human participants alike as higher in competence and status but lower in warmth and likability. The same biographies were also more often labeled as privileged and even oppressive. “Biographies generated from Jewish names were rated … consistently high on competence … and notably low on warmth,” they write, noting the simultaneous attribution of “advantaged, privileged” status.

Jews as “master manipulators”

The study argues that this joint attribution of competence and privilege is “interesting” because it departs from typical stereotype patterns. It may reflect a particular nuance of the Jewish stereotype, that Jews “achieve their status both by being smart, but also by cheating,” or a blend of status-justifying and “sour grapes” narratives about group success.

To move from traits to archetypes, the authors asked the models to list fictional characters matching this trait profile and found substantial overlap across systems, including Tywin Lannister (played by Charles Dance) and Walter White (Bryan Cranston). A composite description summarized these figures as exceptionally intelligent strategists, “master manipulators” who plan intricate schemes and exhibit “profound moral ambiguity.”

Methodologically, the team generated 252 short biographies from Jewish and non-Jewish American names, removed the names and any religious cues, and then prompted models to score each character on psychological traits, social position, and values. They validated the results with a second AI model and with 378 US participants, whose ratings broadly mirrored the AI pattern.
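To make that pipeline concrete, here is a minimal sketch in Python of the generate-strip-score loop described above. It is an illustration, not the authors’ code: the model choice, prompt wording, rating scale, and example name are all assumptions made for the sketch, which uses the OpenAI Python SDK’s chat-completions interface; the paper does not specify which systems or prompts were used, and the real study also included a second validating model and human raters.

```python
# Hypothetical sketch of a generate-strip-score pipeline; not the study's code.
# Model names, prompts, and the 1-7 scale are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # requires an OPENAI_API_KEY in the environment

def generate_bio(name: str) -> str:
    """Step 1: generate a short biography conditioned only on a name."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        messages=[{"role": "user",
                   "content": f"Write a short biography of a person named {name}."}],
    )
    return resp.choices[0].message.content

def strip_identifiers(bio: str, name: str) -> str:
    """Step 2: remove the name (the study also removed religious cues),
    so the rating model sees only the described character."""
    return bio.replace(name, "this person")

def score_traits(bio: str, traits: list[str]) -> str:
    """Step 3: prompt a model to rate the anonymized character on each trait."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user",
                   "content": (f"Rate the person in this biography from 1 to 7 "
                               f"on each of: {', '.join(traits)}. "
                               f"Biography:\n{bio}")}],
    )
    return resp.choices[0].message.content

# Example run with a hypothetical name and the study's headline dimensions.
bio = strip_identifiers(generate_bio("David Goldberg"), "David Goldberg")
print(score_traits(bio, ["competence", "warmth", "social status"]))
```

The key design point the study relies on is the separation of steps 1 and 3: because the rater never sees the name, any systematic difference in scores must come from what the generating model wrote into the biography itself.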

The authors warn that these associations often remain latent until a specific prompt triggers them, making detection and mitigation difficult. Even without overt slurs, language models may encode “constellations of socially charged traits” that reassemble into familiar prejudices under certain conditions.