There was an article on Medium yesterday: My First Virtual Reality Groping. In it, a first-time VR/HTC Vive user describes how she was virtually groped by another player inside an online multi-player VR game, within the first three minutes of her very first VR session, and how it ruined her experience and deeply disturbed her.
I do not know what to call player “BigBro442’s” behavior, but I do know that it is highly inappropriate, and toxic for VR as a whole. This, people, is why we can’t have nice things. This is far from the first instance of virtual harassment or VR griefing that I’ve heard of, but it’s the one that got me thinking, because of this comment on the article:
This is reality. The best we can do is educate, starting with articles like this.
No. That is not true. We can do better than that. Unlike reality, where someone might be assaulted inside their own home, or in some dark back alley, with no witnesses around or evidence left behind, this is virtual reality, which only exists as a sequence of ones and zeros on some Internet server. That server has absolute knowledge of anything that goes on anywhere inside the virtual world it maintains, like an omniscient Big Brother. If virtual harassment happens in virtual reality, maybe virtual reality needs virtual courts.
Here is a not-so-modest proposal, off the top of my head, using SteamVR/Steam as example platforms:
- Any server maintaining a virtual world potentially used by more than one person at the same time keeps a ring buffer of each connected user’s avatar state for the last, say, five minutes. That’s not overly demanding: sampling a head tracker and two hand trackers at, say, 30 Hz over five minutes yields 9,000 samples of three tracker states each; at roughly 28 bytes per tracker state (position plus orientation), that’s approx. 750 kB of data total, per user. (A minimal sketch of such a buffer follows the list.)
- The client user interface of any shared virtual environment contains a button in some easily accessible standard place, say in SteamVR’s overlay, to file a harassment complaint.
- If a user (“Alice”) files a complaint, several things happen. First, and most importantly, the server immediately dumps the avatar state ring buffers of all connected (or recently connected) players to a file. Second, Alice is immediately charged a small fee, say $5, on the credit card associated with her Steam account. This is a micro-transaction, an existing Steam feature. The fee’s purpose is to discourage another form of harassment, namely the filing of frivolous complaints against innocent users.
- Files generated by complaints, with personally identifying information redacted, will be reviewed by a peer group of humans. This might be done by appointed moderators, or might even be crowd-sourced.
- If review determines that behavior contained in the five-minute replay violates community standards, Alice will be refunded the fee she was charged, and offending user Bob’s Steam account will be temporarily suspended, say for one day on the first offense, starting either immediately or the next time Bob attempts to log in. And I mean Bob’s entire Steam account is suspended, not just his access to one particular server or shared VR application: Bob’s on time-out and can go read a book.
- If review determines that the complaint was without merit, nothing happens to accused user Bob, and Alice is not refunded her fee. If Alice disagrees, she can raise the stakes by re-filing the same complaint for another $5 fee, the total $10 then being refundable or not, etc.
- If review cannot reach agreement, or review does not happen within a reasonable time frame, Alice is refunded her fee.
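To show that the first bullet point really is not demanding, here is a minimal sketch of such a per-user avatar state buffer in C++. Everything in it is my own assumption for illustration: the sample layout (one position and one orientation quaternion per tracker, 28 bytes per tracker state), the class and member names, and the dump mechanism are not part of any actual SteamVR or Steam API.

```cpp
#include <array>
#include <cstddef>
#include <vector>

// One tracked device: position (3 floats) plus orientation quaternion
// (4 floats) = 28 bytes per tracker state.
struct TrackerState {
    std::array<float, 3> position;
    std::array<float, 4> orientation;
};

// One avatar sample: head plus two hand controllers.
struct AvatarSample {
    std::array<TrackerState, 3> trackers;
};

// Fixed-capacity ring buffer holding the last five minutes at 30 Hz:
// 30 Hz * 300 s = 9,000 samples * 84 bytes ~= 756 kB per user.
class AvatarRingBuffer {
public:
    static constexpr std::size_t kCapacity = 30 * 300;

    // Called once per sampling tick; overwrites the oldest sample when full.
    void push(const AvatarSample& s) {
        buffer_[head_] = s;
        head_ = (head_ + 1) % kCapacity;
        if (size_ < kCapacity)
            ++size_;
    }

    // Called when a complaint is filed: returns the buffered samples in
    // chronological order, ready to be written to an evidence file.
    std::vector<AvatarSample> dump() const {
        std::vector<AvatarSample> out;
        out.reserve(size_);
        std::size_t start = (head_ + kCapacity - size_) % kCapacity;
        for (std::size_t i = 0; i < size_; ++i)
            out.push_back(buffer_[(start + i) % kCapacity]);
        return out;
    }

private:
    std::vector<AvatarSample> buffer_ = std::vector<AvatarSample>(kCapacity);
    std::size_t head_ = 0; // next write position
    std::size_t size_ = 0; // number of valid samples
};
```

The buffer is allocated once at its full capacity, so the per-tick cost during gameplay is a single 84-byte copy; the only non-trivial work, the chronological dump, happens when a complaint is actually filed.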
Okay, so this is ridiculous, right? Not from a technical feasibility point of view, which I think I laid out above, but from an organizational and cultural point of view. One might say that it is a severe regulatory overreach, a violation of the freedom and the very fundamental principles of online gaming, and that the idea of community review is ludicrous on the face of it.
Well, I might have agreed — until recently, that is, when I stumbled across this. Holy Moly! What’s that? Multi-player game servers retaining state data of all players, which can be dumped to a permanent file as evidence for later peer review by a number of appointed or self-appointed judges, with crowd-sourced verdicts and suspensions or bans handed out to cheaters, and judges being rewarded or punished for good or bad judgment? And it works?
If cheating in Counter-Strike is a big enough deal to create a system like this, would it be so outrageous to apply the same basic idea to harassment in shared virtual reality, which, due to VR’s strong sense of immersion and presence, arguably has a larger negative impact on the harassed than losing a round of CS?
Discuss.
“Harassment”? It’s just goofing. You can literally teleport away if you don’t like it. Losing a round of CS has financial implications, cos it’s an esport. These two things can’t be compared, because the former is a total non-issue.
So you’re saying that Counter-Strike is serious business? I’m saying that VR is serious business, too.
you should leave your bubble and try to understand the problem.
Is it really that hard to just ignore someone in VR?
Without having much experience in the area, I can’t really evaluate this properly; but I get the feeling that the more immature you are, the more seriously you’ll take someone who is trying to get under your skin. I’ve seen it happen in games and such, hell, even in text-only chatrooms. But maybe there is some important difference in VR, I don’t know; or maybe society has decided adults don’t have to behave like adults when someone is trying to mess with them (I’ve been seeing some sad signs that this might be happening more and more lately).
This sorta reminds me of kids playing with laser pointers…
This could just be treated the same way as muting someone in text-based chat. If someone is communicating in a bothersome way, just mute them. In a VR game, this would block their detailed hand movements, and they would just use default animations or no animations at all. Or you could give a player the power to create a “safe space” around her that others can’t enter without permission. If it’s a multiplayer combat game, just kill the person harassing you. Thinking about this is actually giving me gameplay ideas for VR.
If there is ever a widely adopted VR Second Life-style game, the anonymity might not be a problem. The in-game avatars would have a reputation, and there could be in-game courts to put the in-game avatars in jail or issue restraining orders. I think solving this problem without breaking immersion would be cool. If someone was “humping” your dead body in Halo 2, all you could do was try to kill them to shut them up or hump them back.
I agree, a block or mute command makes sense. Preferably not just so the player can’t see movements of people she blocks, but so the other player actually can’t get into her space.
Even better: design VR so you can’t touch another player without their explicit consent. That can actually be coded into the software rather than just into the law, as it is in RL.
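For what it’s worth, here is a hedged sketch of what “consent coded into the software” might look like on the server side, in the same spirit as the buffer sketch above; the class and method names are hypothetical, not any existing API:

```cpp
#include <set>
#include <utility>

// Hypothetical server-side gate: avatar-to-avatar contact is only simulated
// if both players have explicitly granted each other touch consent.
class TouchConsent {
public:
    using PlayerId = unsigned int;

    void grant(PlayerId from, PlayerId to) { grants_.insert({from, to}); }
    void revoke(PlayerId from, PlayerId to) { grants_.erase({from, to}); }

    // Contact requires a grant in both directions.
    bool contactAllowed(PlayerId a, PlayerId b) const {
        return grants_.count({a, b}) > 0 && grants_.count({b, a}) > 0;
    }

private:
    std::set<std::pair<PlayerId, PlayerId>> grants_;
};

// In the collision/physics step, disallowed contact is simply dropped:
//   if (!consent.contactAllowed(toucher, touched))
//       return; // no haptics, no animation, no event -- the touch never happens
```

The key design point is that this would run on the server, so a modified client could not bypass it.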
Have you read Julian Dibbell’s “A Rape in Cyberspace” from 1993? It describes a pretty similar situation in the text-based LambdaMOO over 20 years ago, and how the community dealt with it.
http://www.juliandibbell.com/articles/a-rape-in-cyberspace/
One of the real-world analogs for these types of groups is conventions (PAX, Worldcon, etc.). Many conventions are staffed by volunteers and have harassment policies that are enforced by said volunteers. Combining that with the ability to record the event would make enforcement a lot easier.
Another advantage for enforcement in a VR space is that the server could watch the users: if a user seems to be receiving harassment, a message could pop up in their UI asking if they want to report it. Other options besides reporting would be to shadow-ban the harasser, give the victim a personal space shield that the attacker couldn’t breach, or both.
I think that as the VR social space matures, there will be systems like this put into place.
I think that for a while the harassment problem in VR spaces will be worse than in the real world, because people will feel like they can get away with it, because they “aren’t really doing anything” to the victim and “they can leave if they don’t like it.” People who don’t feel like there is a problem are probably also the same people who say, “I’m white and I have never had problems with the police, so all these black people who are complaining are probably blowing things out of proportion.”
You’re a fag and she’s an overdramatic idiot
In case you’re not listening to Voices of VR (you should): http://voicesofvr.com/371-online-harassment-in-virtual-spaces/
Way to trivialize the experiences of victims of ACTUAL sexual abuse. Because unlike the griefing “victim” in the story, they can’t just take off the HMD or try a different server.
1) If you had read my post, or the article to which I linked, you would have noticed that neither the author nor I ever claim that she was sexually assaulted or sexually abused. While the linked article does once use the phrase “sexual harassment,” she applies it only indirectly to her experience, in the general statement “Women, after all, are supposed to be cool, and take any form of sexual harassment with a laugh.”
2) Allow me to quote from a comment on the linked article:
“I’m tired of having to look at comments in which some guy argues that she’s insulting real victims of sexual assault, talking about sexual assault victims in the third person. As an other. As an object to be used for an argument.
I don’t feel insulted at all; if anything, the experience of sexual assault creates a pit of empathy in your heart for those who have felt uncomfortable or disturbed by the inappropriate actions of others, regardless of how the general public perceives those actions.”
Another approach, taken by BigScreen, is to add a “personal space bubble”, where other avatars intersecting the bubble turn invisible. (Example: http://cdn.uploadvr.com/wp-content/uploads/2016/06/BigScreen-Person-Space-Bubble.gif, full article: http://uploadvr.com/bigscreen-adds-personal-space-bubble-stop-vr-trolls-tracks/)
Good moderation tools are essential, but in this case, you can go a step further and just make the undesired behavior literally impossible.
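As a concrete illustration, here is a minimal sketch of how such a bubble check might work on the client, assuming it knows each remote avatar’s head position; the radius, names, and render hook are made up for illustration and are not BigScreen’s actual code:

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<float, 3>;

static float distance(const Vec3& a, const Vec3& b) {
    float dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Per-frame check: any remote avatar whose head enters the local player's
// personal space bubble is simply not rendered this frame.
bool shouldRenderRemoteAvatar(const Vec3& localHead, const Vec3& remoteHead,
                              float bubbleRadius /* e.g. 1.0f meters */) {
    return distance(localHead, remoteHead) >= bubbleRadius;
}
```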
There are clearly ways to programmatically prevent certain behavior from ever occurring, but I believe it is hard, maybe foolhardy, to try to legislate human interaction through programmed means. The programmers will always play catch-up with users inventing new ways to be dicks (false negatives), and even seemingly specific rules might prevent legitimate interaction from ever happening, or new ways of interaction from developing (false positives, aka the Scunthorpe problem).