Solarpower
Well-known member
This one is more accurate. Guess we all know what's happened here... This image was (allegedly) smuggled from the datacenter.
> Yeah but what recovery SLA did they pay for

If they've outsourced their DC then a DR plan would be expected of the supplier. Or somebody hasn't procured right.
> multi-million profit margins? I doubt it. It doesn't matter what your gross income is. Can they afford high availability infrastructure? We don't know. Clearly they don't have it now. This isn't the first multi-day outage. They seem willing to accept the risk. Next time they have data center issues, it's likely we'll be down for a couple of days again.

Ahh, here it is again... the "poor, poor little indie studio SSG" card. That sympathy card has some serious, unwarranted miles on it.
Pro-tip - They're a multi-million dollar company (not even counting the LoTRO revenue) whose literal entire business is selling 1s and 0s.
If they haven't "justified" having a proper offsite failover plan then they have essentially "decided" they don't want to stay in business if/when they encounter a true DR scenario.
IT history is littered with the corpses of hosting companies that suddenly shut down / failed virtually overnight.
Trusting your hosting provider to also be your DR site/planner is putting all of your eggs in one basket. Not a good idea.
Redundancy is one of the core tenets of DR planning. Diversity (in your service providers) is one of the core tenets of redundancy.
I have to laugh at all of you who are all-in on "a good data center is foolproof! They must have cheaped out!" Two real-world examples, both Microsoft Azure:

One time they were doing cleanup and had a script to delete old SQL VM instance backups. Unfortunately the script was written wrong, and instead of deleting "old backups" it deleted the live customer VMs. Every SQL instance running on the whole East Coast node went poof. Oopsie. That was a three-hour outage for my company.

Another real case: they somehow had a whole rack of management servers that sat completely outside all the fault-tolerance mechanisms. No backups, no secondary power supply, not even scripted into the VM management systems. Then they had a power failure. All the actual customer machines were fine and failed over cleanly, but this rack of network routers, VM managers, etc. went down hard. When it got back on power and rebooted, there was nothing to tell the master VM manager that these servers were supposed to be core infrastructure; it just saw a bunch of high-powered servers and started spinning up high-price-tier customer VMs on them, overwriting all the critical VMs that weren't backed up. That one didn't hit my company, but it was a 24+ hour outage for a lot of Azure customers.
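To make the first failure mode concrete: a minimal Python sketch of how a cleanup filter bug like the one described can select live instances instead of old backups. All names, ages, and the retention window here are illustrative assumptions, not Azure's actual tooling.

```python
from datetime import datetime, timedelta, timezone

# Illustrative clock and retention policy (assumptions, not real values).
NOW = datetime(2024, 1, 31, tzinfo=timezone.utc)
RETENTION = timedelta(days=30)

# Hypothetical inventory: a long-running live VM and two backups.
instances = [
    {"name": "sql-live-01", "kind": "live", "created": NOW - timedelta(days=400)},
    {"name": "sql-backup-old", "kind": "backup", "created": NOW - timedelta(days=90)},
    {"name": "sql-backup-new", "kind": "backup", "created": NOW - timedelta(days=2)},
]

def buggy_targets(items):
    # Bug: filters on age alone, so a *live* VM older than the
    # retention window gets selected for deletion too.
    return [i["name"] for i in items if NOW - i["created"] > RETENTION]

def fixed_targets(items):
    # Fix: require both conditions -- the item must be a backup
    # AND older than the retention window.
    return [i["name"] for i in items
            if i["kind"] == "backup" and NOW - i["created"] > RETENTION]

print(buggy_targets(instances))  # ['sql-live-01', 'sql-backup-old']
print(fixed_targets(instances))  # ['sql-backup-old']
```

The point of the sketch: the buggy predicate is not "wrong" in any way a syntax check would catch; it just encodes one condition where the intent required two, which is exactly why a dry-run mode and a second reviewer on destructive scripts are worth the overhead.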
> Maybe a little light at the end of the tunnel?

That's just a train. Don't get your hopes up and, please, get off the rails!
> That's just a train. Don't get your hopes up and, please, get off the rails!

No trains in my tunnel! No rails. But perhaps my eyes are playing tricks on me?
> I tried logging in to see if anything had changed... it tries, and for the moment fails, to connect to the "patch server". Maybe a little light at the end of the tunnel? We shall see...

Same here.
> Yeah but what recovery SLA did they pay for

> multi-million profit margins? I doubt it. It doesn't matter what your gross income is. Can they afford high availability infrastructure? We don't know. Clearly they don't have it now. This isn't the first multi-day outage. They seem willing to accept the risk. Next time they have data center issues, it's likely we'll be down for a couple of days again.

Nobody said multi-million dollar margins. It does matter what your gross revenue is, though. It matters a lot.
> If you can't devote 2-3% of your gross revenue to a DR plan to ensure business continuity in the event of a serious disaster, you've got serious management/executive board level issues.

What's interesting is that in this cross-post, Cordo mentions that despite their redundancies, they were able to get it working. That suggests they did indeed have a DR plan, but perhaps it hadn't been gameday'd recently and ended up not working out.
When your company's entire livelihood depends on the games being operational
> Just imagine if WOW, FFXIV, or GW2 went down this long with no timeline or anything.......

"Imagine," he said...