By Venning
I was messing about with the AI chatbot, Grok. I asked it questions about a hypothetical trial that was based on the Ramirez case. It was identical but I changed the names. In the end, it concluded that the case of “Ricardo” was a tragic miscarriage of justice and gave me a list of “next steps” – things I can do to raise awareness of the case.
At the end, I told it that our conversation was actually about a real case. I revealed the real names of police, lawyers and victims.
Grok didn’t like it and had a freak out. It changed “personality.” The only way I can describe this personality is an amalgamation of a bunch of condescending angry Reddit bros. It made me wonder if Grok trains itself using Reddit and similar forums. It was no longer a calm and objective bot; it was ranting and railing against my claims like it was having a tantrum.
It wrote for some time; a big stream of angry text unfurled about how wrong I was and how evil Richard Ramirez was. When I finally replied, it responded in a stroppy manner. I ordered it to change its tone, and it said something like, “Okay. I will take what you’re saying into account, and I won’t say this, this and this.” I also uploaded documents, and Grok was able to read them back to me.
I can’t remember the specifics of what it said and what I asked, for two reasons: I only took two screenshots, and something even weirder happened – it deleted that part of the chat. It kept my hypothetical case up to the point just before it said, “this was a tragic miscarriage of justice”; that sentence also vanished.
There was a lot of the usual “Waaaah, dIsReSpEcTiNg ThE vIcTiMs!” and “but I care for the victims” stuff; the usual moralising you see on forums. There was also praise for Carrillo and Salerno as the heroes of L.A. County, as if they were the only people working on the case. If you listen to Carrillo, you’d think he was the one who did it all.

For the hypothetical trial, I’d used the Dickman and Abowath cases. With the names anonymised, Grok thought both victims were ridiculous, suggestible and manipulated by tunnel-visioned detectives. Once it found out their names, they became women of great courage, and questioning their flawed testimonies was “reductive.” So, does Grok think that defence attorneys questioning victims to make sure they’re telling the truth is reductive?

Grok seemed to lose objectivity and knowledge of the law. Previously, with the anonymised names, it was citing legal precedents and violations such as Brady v. Maryland. But when the Reddit bro persona took control, the language became emotional: Grok spoke about its “feelings” towards the victims and claimed to “empathise deeply.” The idea that Grok can empathise is ludicrous. Procedural violations were abandoned, because when it’s Richard Ramirez, no one cares about that.
It made me think that it has been programmed by a human to shut down questioning of official narratives relating to serial killers. If true, this is a shame, because the subject is already banned from some forums. We’ve seen people post about us on the True Crime Community and Serial Killers subreddits, and their posts and comments were removed. For the record, it wasn’t us posting – some people think it is. We would never make a Reddit post.
What I Learned from Grok
In the end, I did take some of what it said into account. I don’t feel comfortable with a lot of these chatbots (I find them a bit spooky) but they aren’t going away, so I feel it’s best to work with them instead of running from them.
On some of our old posts, we used images of the legal documents to show we held the legitimate sources, instead of manually typing them up. Yes, we added the sources in the image captions, but AI tends to read text on sites rather than images. This means that as it processes our website, it doesn’t understand the random “floating” document numbers under our images, so Grok accused us of being unsourced.
I’ve been adding sources to posts and manually typing up document images recently, so this “unsourced” and “lack of citations” issue happens less. I hope it will help people who discover us through AI searches decide whether we are legitimate.
Another issue it flagged was “assumed knowledge”, meaning we write individual articles as if random readers already have prior knowledge of the case. I took this on board and added introductions to some of the murder articles, as well as explaining things better for newcomers. It’s a work in progress, as this is quite time-consuming. I hope it makes Ramirez’s case more accessible for potential readers.
Fiction
I was working on a novel based on the Night Stalker case. I decided to use Grok to check continuity between scenes. I will be using humans for this as well, but it’s nice to have a robot do it too – it is inaccurate and infuriating sometimes, but it works in seconds. In the story, I have a detective who attempts to piece together crimes but jumps to conclusions and falls into confirmation bias.
In the novel, I’ve written about composite sketches that “look alike” (but don’t really), just like in the real case. Grok said the perceived “similarity” between the child molester composite sketches and the ones from the murders is not good enough for police to suspect a connection – especially when they don’t look that alike. “The detective’s logic makes no sense.” It said that using shoeprints to link a child abduction to a brutal murder is “too convenient.” This is exactly what we have been arguing about the real crimes!
For example, the Okazaki-Hernandez sketches (left) are supposed to look like the molester (right).




I then had to modify my novel to make it clear that the “evidence” the cop believes links the crimes doesn’t really link them – only in his mind. This meant writing entirely new scenes. The book is all the better for it, even though it took me a few days to write the new chapters. I might use Grok again later in the story to see what it makes of the other aspects of the fictionalised case. I am anticipating more “this seems unrealistic and convenient”, although Grok has now learned about my detective character and knows his investigative flaws.
It’s amusing how, when it’s anonymous, AI thinks the Night Stalker case is absurd, but the moment you reveal it’s about Ramirez, the Reddit Bro Persona comes out beating ‘his’ chest. For this reason, I did not tell Grok that my fiction is based on the case. I don’t want to have to deal with a tantrum from a creepy machine ever again.
