Here's my take on pingbacks:
1. The sheer amount of "public" data on the internet allows people to have "public" whisper-network conversations with the expectation that the problematic person is unlikely to notice.
2. People are unclear about what a "public" toot is. Unlisted? Still public.
3. Because of #1 and #2, pingbacks set up the most vulnerable users: problematic people will get a *notification* whenever someone links to their website.
This will lead to more online harassment.
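For anyone who hasn't seen the mechanics: a pingback (or its modern cousin, Webmention) is just an HTTP POST that the linking site sends to the linked site, which is why the target's owner gets notified at all. Here's a minimal sketch of the sending side, assuming Python with the `requests` library; the URLs at the bottom are made up:

```python
import re
import requests

def discover_endpoint(target: str) -> str | None:
    """Find the target's Webmention endpoint (Link header, then HTML rel)."""
    resp = requests.get(target, timeout=10)
    # HTTP Link header: Link: <https://example.com/wm>; rel="webmention"
    link = resp.links.get("webmention")
    if link:
        return link["url"]
    # Naive HTML fallback; a real sender would parse the DOM and also
    # resolve relative URLs against the target.
    match = re.search(
        r'<(?:link|a)\b[^>]*rel="webmention"[^>]*href="([^"]+)"', resp.text)
    return match.group(1) if match else None

def send_webmention(source: str, target: str) -> None:
    """Tell `target` that `source` links to it -- this *is* the notification."""
    endpoint = discover_endpoint(target)
    if endpoint:
        requests.post(
            endpoint, data={"source": source, "target": target}, timeout=10)

# Hypothetical URLs for illustration only.
send_webmention("https://my-blog.example/heads-up-post",
                "https://their-site.example/")
```

If a platform sends these automatically for every link, the author of `source` has effectively notified the very person they were warning others about.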
For every new software feature you want to add, think, "How would a group of persistent, tech-savvy people use this feature to harass someone?"
Maybe that means you decide not to implement it. Maybe that means you work on something else instead, something that supports the people who receive the majority of harassment.
@sphakos funny, I can't think of a web tech that can't be abused this way
@MightyPork @sphakos I know, I just can't see how anything on the Internet can -not- be abused to harass someone
even static sites are used continuously to that effect
even books are
humanity is messed up
This duality is something I'm struggling with in Aardwolf's design. In particular, I want an "aspects" system that lets people present different sides of themselves to different people. To really be useful for some vulnerable classes of people, it needs to be possible to unlink your aspects so that they can't be trivially revealed as the same person. Think alts with single sign-on and a unified timeline. (1/2)
But that feature would also absolutely be abused by shitlords to create harassment sock puppets. How do we weigh these options? (2/2)
@gcupc @MightyPork @Efi @sphakos Maybe let the admins see everyone's linked aspects and make the report button affect the whole account.
@MightyPork @Efi @sphakos
Yep, already planned. Not sure it's enough.
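To make that moderation idea concrete, here's a toy sketch of admin-visible linkage plus whole-account reports. Everything in it (`Aspect`, `Account`, `file_report`) is hypothetical illustration, not Aardwolf's actual design:

```python
# One Account owns several Aspects. Followers only ever see an Aspect;
# moderators can resolve any Aspect back to its Account, and a report
# lands on the whole Account, not just the reported persona.
from dataclasses import dataclass, field

@dataclass
class Aspect:
    handle: str          # public-facing identity, e.g. "@gardener"
    account_id: int      # linkage; never exposed through public APIs

@dataclass
class Account:
    id: int
    email: str
    aspects: list[Aspect] = field(default_factory=list)

@dataclass
class Report:
    reported_handle: str  # the aspect a user reported
    account_id: int       # resolved server-side, so the report
                          # covers every aspect the account owns

def file_report(handle: str, accounts: list[Account]) -> Report:
    """Resolve a reported aspect to its owning account (moderator view)."""
    for account in accounts:
        for aspect in account.aspects:
            if aspect.handle == handle:
                return Report(reported_handle=handle, account_id=account.id)
    raise LookupError(f"no aspect named {handle}")
```

Under this framing, unlinkability is a property of the public API (which only ever serves `Aspect.handle`), while the database keeps the full mapping for moderators.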
@MightyPork @sphakos @Efi I wonder how many of those features ended up implemented in alternate fediverse software
@sphakos from your description here, I feel like it's both good and bad. For more or less the same reason. Like "it's bad that I can see bad people talking about me, and it's good that I can see bad people talking about me so I can prepare if something specific is about to happen"
@sphakos I've always thought of it like "what's the worst thing someone could do with this feature?"
I think the technology itself can only go so far - moderation and curation will still be necessary.
@Wolf480pl @nolan @sphakos cars are bullshit
@sphakos I think you’re indicating a design problem here that’s at the heart of why we struggle to create “social” software that everyone can safely use. Almost nothing helps us create the kind of opt-in, public-but-not-broadcast-on-the-news conversations we have in person. Instead we get either very locked-down environments or abuse vectors.