

Does anyone have recommendations to replace Nova?
While I agree with you, I wish Mehta would entertain the idea of spinning Chrome off into an independent company, even though it is still unclear to me how a browser company could generate revenue, especially enough to pay the salaries of the army of engineers working on Blink.
Aw, shucks
The author did not mention that you, as an end user, can customize rankings for every website and build custom searches (lenses), which are in my opinion the features that make Kagi unique and more useful than other meta search engines. And to my knowledge you cannot replicate those in SearXNG.
I think the more damning part is that OpenAI’s automated moderation system flagged the messages for self-harm, but no human moderator ever intervened.
OpenAI claims that its moderation technology can detect self-harm content with up to 99.8 percent accuracy, the lawsuit noted, and that tech was tracking Adam’s chats in real time. In total, OpenAI flagged “213 mentions of suicide, 42 discussions of hanging, 17 references to nooses,” on Adam’s side of the conversation alone.
[…]
Ultimately, OpenAI’s system flagged “377 messages for self-harm content, with 181 scoring over 50 percent confidence and 23 over 90 percent confidence.” Over time, these flags became more frequent, the lawsuit noted, jumping from two to three “flagged messages per week in December 2024 to over 20 messages per week by April 2025.” And “beyond text analysis, OpenAI’s image recognition processed visual evidence of Adam’s crisis.” Some images were flagged as “consistent with attempted strangulation” or “fresh self-harm wounds,” but the system scored Adam’s final image of the noose as 0 percent for self-harm risk, the lawsuit alleged.
Had a human been in the loop monitoring Adam’s conversations, they may have recognized “textbook warning signs” like “increasing isolation, detailed method research, practice attempts, farewell behaviors, and explicit timeline planning.” But OpenAI’s tracking instead “never stopped any conversations with Adam” or flagged any chats for human review.
“This London Borough” is Hounslow BTW
My reaction when I read this article
Also, this interviewee sums it up perfectly:
“If I know from looking at company reviews or the hiring process that I will be using AI interviewing, I will just not waste my time, because I feel like it’s a cost-saving exercise more than anything,” Cobb tells Fortune. “It makes me feel like they don’t value my learning and development. It makes me question the culture of the company—are they going to cut jobs in the future because they’ve learned robots can already recruit people? What else will they outsource that to do?”
I still think no-code tools are better suited for building prototypes to prove your idea without requiring development knowledge. LLMs just added a complicated extra step.
I actually switched from Action Launcher to Nova because the former kept crashing.