Here's my take on pingbacks:
1. The sheer amount of "public" data on the internet means people hold "public" whisper-network conversations with the expectation that the problematic person is unlikely to notice.
2. People are unclear about what a "public" toot is. Unlisted? Still public.
3. Because of #1 and #2, pingbacks set up the most vulnerable users: a problematic person will get a *notification* whenever someone links to their problematic website.
This will lead to more online harassment.
For every new software feature you want to add, ask: "How would a group of persistent, tech-savvy people use this feature to harass someone?"
Maybe that means you decide not to implement it. Maybe that means you work on something else instead, something that would support the people who receive the majority of harassment.
@sphakos I think you’re indicating a design problem here that’s at the heart of why we struggle to create “social” software that everyone can safely use. Almost nothing helps us create the kind of opt-in, public but not broadcast on the news, conversations we have in person. Instead we get very locked-down environments or abuse vectors.
This duality is something I'm struggling with on Aardwolf design. In particular, I want an "aspects" system that lets people present different sides of themselves to different people. To really be useful for some vulnerable classes of people, it needs to be possible to unlink your aspects so that they can't be trivially revealed as the same person. Think alts with single sign on and a unified timeline. (1/2)
@sphakos from your description here, I feel like it's both good and bad, for more or less the same reason: "it's bad that I can see bad people talking about me, and it's good that I can see bad people talking about me, because I can prepare if something specific is about to happen."