

They don’t need it so I don’t provide it, that’s my primary reason and that should be enough.
It is enough. In fact, it’s better than the “you should trust your SO” argument which doesn’t make any sense.
I didn’t say it’s something you need. Read the rest of my comment.
If you just see this and, like 20 others, blindly say “you should trust your partner” then you haven’t thought about it at all. If you trust your partner completely, then you trust them to use your location information responsibly, right? So trust does not have any bearing on whether to use it or not.
The issue for me is that we should try to avoid normalising behaviour which enables coercive control in relationships, even if it is practical. That means that even if you trust your partner not to spy on your every move and use the information against you, you shouldn’t enable it because it makes it harder for everyone who can’t trust their partner to that extent to justify not using it.
On a more practical level, controlling behaviour doesn’t always manifest straight away. What’s safe now may not be safe in two years, and if it does start ramping up later, it may be much, much harder to back out of agreements made today which end up impacting your safety.
End-to-end ML can be much better than hybrid (or fully rules-based) systems. But there’s no guarantee, and you have to actually measure the difference to be sure.
For safety-critical systems, I would also not want to commit fully to an e2e system, because the worse explainability makes it much harder to be confident that there is no strange failure mode you haven’t spotted which may be, or may become, unacceptably common. In that case, you would want to be able to revert to a rules-based fallback that may once have looked worse-performing but which has turned out to be better. That means you can’t just delete and stop maintaining that rules-based code if you have any kind of long-term thinking. Hmm.
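Not tied to any particular codebase, but the shape I have in mind is roughly the sketch below (Python, with made-up names like `predict_e2e` and `predict_rules`): the rules-based path stays wired in as a first-class, auditable code path behind a flag, rather than being deleted once the e2e model "wins" a benchmark.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch only: keep the rules-based path alive alongside the
# e2e model so you can switch back if the model turns out to have an
# unacceptable failure mode. All names here are illustrative assumptions,
# not any real system's API.

@dataclass
class Decision:
    action: str
    source: str  # "e2e" or "rules", recorded for auditing/explainability

def predict_rules(features: dict) -> Decision:
    # Deliberately simple, inspectable logic.
    if features.get("obstacle_distance_m", float("inf")) < 5.0:
        return Decision(action="brake", source="rules")
    return Decision(action="proceed", source="rules")

def predict_e2e(features: dict) -> Optional[Decision]:
    # Placeholder for the learned model; imagined to return None when it
    # can't produce a confident output (e.g. out-of-distribution input).
    return None

def decide(features: dict, use_e2e: bool = True) -> Decision:
    if use_e2e:
        result = predict_e2e(features)
        if result is not None:
            return result
    # Fall back to (or run permanently on) the rules-based path.
    return predict_rules(features)
```

The flag and the `source` field only mean anything if the rules path keeps being tested and maintained, which is exactly the long-term cost I was getting at.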
Yeah, that’s a good point. I guess in light of that what I would say is that, if you are going to have a state-run payment processor, you need to build in a) pluralism (enable and encourage multiple processors) and b) legal protections (a legal guarantee that the processor’s remit is limited: it must allow all payments unless instructed to block them by a court order). Both would help mitigate or slow down anti-democratic backsliding.
That and (at least for now) it may be difficult to communicate contextual information to an LLM that a human historian or philologist may be able to take in implicitly.
It’s a good point, but a payment processor run by the government would also be under pressure (from voters) to wield its power to suppress marginal content.
Imagine a US-government-run payment processor right now - it would be blocking anyone that sells anything “woke” or “DEI”.
It’s wild that, despite not having any evidence to support your theory, you’re still trying to “both sides” this one.
It’s wild that people think that writing a lewd note to a paedophile makes you a paedophile. This is not a discussion where people are going off evidence.
Because people are mostly incapable of using the button as anything other than “I like this” or “I don’t like this”.
The regulations impose additional requirements for a reason - because political advertising can be extremely dangerous. If it’s a question of no political advertising or opaque, microtargeted political advertising that can’t be investigated later, then it’s an easy choice.