
Law & Order: CVU (Cyber Victims Unit)

Microsoft versus Sony, Battlefield versus Call of Duty and Forza versus Gran Turismo. These are some of the rivalries that can get people talking about console wars. “Game On or Game Over” is your place to get inside the minds of Nicholas and Andy as they seek to find the true meaning of gaming and tackle some of gaming’s most controversial subjects. Both are award-winning authors – although the awards haven’t been mailed or created yet – but trust them. Would they lie to you?

Andy: I want to change things up a little bit this week and talk about something that seems to have been largely ignored by the majority of people. Most people who have played online games have had instances where they can’t connect to the servers. Some of those instances are actual issues with the network, maintenance, bugs, or what have you. In other cases the causes are more nefarious outside sources: hackers. I know I’ve had my share of nights of gaming cut short, or prevented from happening at all, due to DDoS attacks. It sucks, and it’s not fun.

However, on 5th October 2016 the United States Department of Justice brought charges against two members of the infamous “Lizard Squad” and ”PoodleCorp”. These are the two groups that have taken credit for outages of some of the most popular games and services: League of Legends, World of Warcraft, Starcraft, Xbox Live, PlayStation Network and several others. They also evidently ran a “phone-bombing” service where they would accept payment to call phone numbers with recorded messages over and over again. On top of all that, it sounds like they were also involved in trafficking stolen payment accounts.

That’s a whole lot of illegal behaviour there. I’m glad someone is finally stepping up and at least trying to offer up a deterrent to others doing this. According to the press release the maximum sentence for this is 10 years in prison. Whether they get that or not is a discussion for another day, but I wanted to start this week’s conversation by asking a couple of questions. First – do you think this is enough of a deterrent to, if not stop others, at least get them to think about it more? Second – why do you think it’s taken so long for the government (of any country) to finally start to take action against groups like this?


Nicholas: Let’s begin with the second question first. I think the reason it’s taken a government this long to finally tackle an issue like this is because of how most of the ‘traditional world’ still views the internet. The internet and the online community overall are still very much treated like an emerging industry, where those who are ‘out’ of the scene either don’t understand what’s going on, or don’t understand it completely. This isn’t just a problem in gaming; it’s a problem in almost every other area where the internet has made an impact. Just think about stories of cyber-bullying and how long it’s taken for authorities or groups to make a move on that. So in a nutshell, the answer to your question is that the world hasn’t yet adapted to what the online community is and what it’s capable of.

To answer your other question, I think at the very least it’ll get some of those “I just hack for fun because it’s harmless” types to sit up and potentially think twice. Like we’ve said, repercussions for these kinds of online crimes are usually unheard of, so now that we’re starting to see backlash I think some will run with their tails between their legs.

You mentioned that the maximum sentence for these crimes could be 10 years in jail. I wanted to touch on this a little further. Do you personally think that’s fair, or is further punishment required? If we look at the sort of damage these acts cause, does it do anything to rectify the losses?

Andy: Personally, I don’t think “just” 10 years is enough. I think there need to be some monetary fines tacked on there as well. Probably not payable to the individual end user, because there is really no way to ascertain who would have played and attach a dollar value to it. At the same time, I think the companies being affected would be able to estimate the amount of revenue lost in these instances, and I think that (or a portion of it at least) should be added as part of the sentence. The harder you can hit people who do this, the more of a deterrent it will be.

I think we’ve also touched on why governments are getting involved now: they’re starting to take notice of the amount of money involved in the industry. We’re talking about billions of dollars, and that’s enough for anyone to sit up and take heed. It doesn’t matter what people think of the gaming industry; when you hit the numbers that it does, you have to respect it and start helping it police the behaviours that have been going on. It seems like that’s how things go, though. Something gains traction, people with ill intentions come around, and then the authorities come in and try to clean it up. Very rarely do the authorities arrive first, which is disappointing.

You touched on something in your reply that I think is worth talking about a little further: that this isn’t necessarily just a “gaming” problem, but a larger issue with how the internet operates. I like that the authorities have stepped in on this, but I also hope this is a sign of what’s to come to try and clean things up as a whole. I’m thinking about, like you said, online bullying, harassment and so on. I can’t find the link right now, but I’ve seen a couple of stories about Twitch streamers being subjected to racist threats, swatting and other types of harassment. People need to start being held responsible for these “harmless jokes” and doing things for the “lolz”. There are probably not enough resources to go after the majority of people who do this, but dinging a few of the most prominent ones should curb some of it. Do you think we’ll see more of this type of response going forward? Or is this an exception to the rule of letting the internet be a Wild West environment where anything goes?


Nicholas: I certainly see change happening, but it’s going to be a very long process before we see any real movement being made. Sure, we might see some of these Lizard Squad members getting hit (although I don’t expect the final punishment to be as significant as we’ve discussed above), but these are perhaps just some of the more notorious members. All the smaller fish, whether it’s petty attacks or crimes that don’t have monetary impacts (think bullying), are not going away anytime soon. The reason is that the internet is just so vast, and the ways that people can mask their actions are even more widespread. Regular internet users like you and I only scratch the surface of what’s possible online; we can’t even begin to imagine what the possibilities out there are. If we can’t stop people from creating fake accounts to troll YouTube videos, good luck catching people who bring down the networks of major companies like Microsoft.

Again, I’m not trying to be pessimistic; I certainly think cases like the ones above are a step in the right direction, but as for when we’ll see real change, that’s well off. I’d like to ask this question, though – at what point do the actions of hackers or bullies become significant enough that they should be pursued by the law? Is it only once monetary losses are involved? When it comes to trolling, at what point do we stop telling the victim to “brush it off and ignore it” and instead get the authorities involved?

Andy: On the surface that seems like a relatively simple question, but I don’t think it’s really that simple, for a number of reasons. How do we define when enough is enough? I have a hard time telling someone how to judge something that is happening to them. There are obviously different levels of online harassment: from juvenile insults, to creepier comments, to sexually explicit remarks, to threats of harm. While I agree that some of that is best just to ignore, if I were on the receiving end of other things on that list I’m not sure how I would react.

In my opinion there is a difference between “trolling” and outright criminal behaviour. The criminal behaviour would be when there is a threat or an act of some kind that can cause someone harm. So when someone says, “I’m going to kill you and your family”, yeah, the police should be involved. When someone swats someone, yeah, the police should be involved. When someone says, “you are a piece of sh*t” on 100 different videos or threads, no, the police don’t need to be involved. With that said, if the person on the receiving end believes they could potentially be in danger, then by all means they should involve the authorities.

Like you, though, I think it’s going to be pretty difficult to police because of how large the internet is. I do think there are tools out there for the authorities to use, but they probably don’t deem many of these instances credible threats, so they are largely put on the back burner while attention is focused on other things. With all this talk so far, it seems like we have been giving a pass to a particular group: the people running these sites (Twitter, Facebook, YouTube, Twitch etc.). They have to take some responsibility for managing their platforms so this type of abuse doesn’t happen as often, or ideally at all. I’m not talking about public lip service where they release a press release saying they are working on finding a solution. Talk is cheap; it’s time they stepped up and started implementing these solutions.

It just seems that in the majority of conversations like this we don’t put enough emphasis on the platforms helping to manage the problem. What do you think? Should the sites be doing more to help curb the issue of harassment and bullying? Do you think they can do more than what is currently being done?


Nicholas: That most definitely needs to be the case. Just as police are meant to protect the people of our ‘real-world’ communities, the companies that run services like YouTube and Twitter need to have policies and people in place to police those communities too. It’s not a small task by any definition, but there is certainly some responsibility that sits with them to ensure that anyone breaching their Terms of Use is dealt with accordingly. It makes me think, though: there’s been a lot of recent talk regarding excessive censorship across platforms like YouTube and Twitter, and it makes me reflect on what you’ve said above – at what point does trolling cross the line? It’s certainly a difficult situation to be in, and I know I’ve seen a lot of heat directed towards Twitter lately for banning certain popular personalities because they don’t fit in with what most people would consider appropriate.

Keeping with what’s been said above, though, YouTube recently launched what they call the ‘YouTube Heroes’ program. In essence, it allows regular users to report videos that breach the Terms of Use, and for each successful report (that is, a report that turns out to be legitimate) they are rewarded with points and benefits. On face value it looks to be creating a culture of people standing up against inappropriate content and users, but there are a lot of people who see it as something much worse. Some claim that it’s fostering a culture of people reporting videos just because they might be deemed in bad taste or offensive to some, and that it will lead to a lot of content being incorrectly shut down or demonetised (which is a big deal for those who make a living off the site). Similarly, it might seem like YouTube is trying to shirk responsibility and push it onto the users.

What do you think of those situations? You asked whether the owners of these sites need to be doing more, but they’re moving more towards a self-service model. Is it shifting the work away from those who should be doing it, or, when we’re talking about a scale as big as the internet, do we need to start getting the users themselves more involved?

Andy: Man, I hate to say this, but I think you are on the money with how these companies are shifting the role of moderator to the actual users. Like you said, that starts to go down a pretty slippery slope. When you put it in the hands of the users there’s really no telling what you’re going to get as a result. What one person finds offensive or abusive the next person could find humorous, when in actuality all you need to do is interpret it in relation to the Terms of Use. That’s why it should not be up to the users but the actual company. When an end user signs up there are Terms of Use that are supposed to protect us from harassment, bullying and threats – it’s up to the company to monitor that. Unless it wants to start paying users to do so, it ultimately should fall on them, not us.

I would like to circle back to the initial topic that started this discussion. Just this past Friday the internet was hit with a massive DDoS attack. The thing about this one, though, is that it didn’t target just the servers of one or two games, or even just one platform. From what I understand, it targeted an internet infrastructure company. The attack knocked out services from several companies in several countries. It doesn’t take any stretch of the imagination to see how serious attacks like these could potentially be. Forget the gaming industry for a minute; scenarios like these can affect first responders, health care, armed forces and a host of other necessary services.

From what I have read, DDoS attacks are hard to prevent due to the nature of how a device connects to the internet. If that’s the case, then the penalties for these types of criminal activity need to be much harsher as a deterrent. Yes, it sucks when a DDoS attack targets a game that we want to play, but it’s not the end of the world. It’s not life threatening, and there’s no need to really get bent out of shape about it. But when an attack targets the very infrastructure of the internet and could potentially impact emergency services, then I have a much bigger problem with that.

To close out this week’s discussion I have a couple of questions for you. First, the early rumour is that this latest cyber-attack was orchestrated by PoodleCorp; do you think it was a direct response to the government bringing charges against their members? Secondly, do attacks like these expose a flaw in how the internet is laid out and how it’s accessed by millions of devices? Lastly, is it time for crimes like these to have more severe sentences attached to them when you take into account the number of victims there could potentially be?

Nicholas: If we’re being purely speculative, I don’t think it’s out of the realm of possibility that the latest attack could be a retaliation, but more likely I think it’s just another example of a hacker group trying to see how far they can get. When we look at these DDoS attacks and who they target – take any one against companies like Microsoft or Sony – it’s purely to cause inconvenience rather than ‘payback’. I know movies like to show hackers as doing it to bring the world to its knees, but it’s most likely people just being arseholes because they can. There’s a sense of accomplishment in dicking over as many people as possible, and this just seems like more of that.

Do I think these stories expose a flaw in the design of the internet? Sure, but at the same time we also need to remind ourselves that these hackers are geniuses. Scumbags, sure, but geniuses nonetheless. While their actions are horrible, there’s no arguing that it takes a level of smarts to pull off what they do. Absolutely it highlights a vulnerability, but then it’s on companies to have measures in place that mitigate the risk and minimise the damage. “How” isn’t a question I can answer, but it’s the question that needs to be asked whenever attacks like these take place.

You raise an interesting point at the end there, and I don’t think you’ll find many people who would argue against some form of punishment for situations like these. When someone or a group deliberately attacks systems to bring networks down, and those outages result in losses – whether that be as trivial as not being able to play Battlefield 1 with friends or as serious as not being able to respond to emergency calls – something needs to be done. I can’t answer what the punishment should be, but losses online should be considered no different from losses in the real world (in that a financial loss is a financial loss regardless of whether it’s online or not).

Ultimately, if you’re ever wondering whether your actions are potentially dangerous or not, just remember the wise words of Vin Diesel in xXx: “don’t be a dick, Dick.”

Tune in next time for the next instalment of Game On or Game Over. If you have any ideas for our next article, feel free to contact Andy or Nicholas on Twitter.



About the author

Nicholas Simonovski

Events and Racing Editor at Stevivor.com. Proud RX8 owner, Strange Music fan and Joe Rogan follower. Living life one cheat meal at a time.

About the author

Andy Gray

From the frozen land of Minnesota, I was the weird kid that begged my parents for an Intellivision instead of an Atari. My love for gaming has only grown since. When I’m not gaming I enjoy ice hockey and training dogs. I’m still trying to get my Elkhound to add to my Gamerscore though, one day this will happen.