DDO unavailable: Saturday March 30th

Jack Jarvis Esquire

Well-known member
Ahh, here it is again... the "poor, poor little indie-studio SSG" card. That sympathy card has some serious, unwarranted miles on it.

Pro-tip - They're a multi-million dollar company (not even counting the LoTRO revenue) whose literal entire business is selling 1s and 0s.
If they haven't "justified" having a proper offsite failover plan then they have essentially "decided" they don't want to stay in business if/when they encounter a true DR scenario.


IT history is littered with the corpses of hosting companies that suddenly shut down or failed virtually overnight.

Trusting your hosting provider to also be your DR site/planner is putting all of your eggs in one basket. Not a good idea.

Redundancy is one of the core tenants of DR planning. Diversity (in your service providers) is one of the core tenants of redundancy.
Yes, but failing to hold your supplier to providing and executing a valid plan that integrates with and supports your own is an abject failure of supply chain management. Without going into details, I've been responsible for commercial deals for UKGov in highly sensitive IT contexts, including NI, for many years. I'd expect to be fired if I wasn't across something as basic as that. Clearly the stakes are lower for a game, but it's still a vital business requirement in context.

I don't think we're disagreeing here. I'm just clarifying the original point I was making. :)
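To put the provider-diversity point in concrete terms, here's a minimal sketch of the kind of job I mean: the same nightly backup pushed to two unrelated S3-compatible providers, so one vendor imploding overnight can never take both copies with it. Every name, path, bucket, endpoint, and environment variable below is made up for illustration.

```python
# Hypothetical sketch: push the same backup to two *independent* providers.
# Different company, different network, different failure modes.
import os
import boto3

BACKUP_FILE = "/backups/nightly-db.tar.gz"  # made-up path

PROVIDERS = [
    {
        "name": "primary-cloud",
        "endpoint": "https://s3.us-east-1.amazonaws.com",
        "bucket": "game-backups-primary",
        "key_env": "PRIMARY_KEY",
        "secret_env": "PRIMARY_SECRET",
    },
    {
        "name": "secondary-cloud",
        "endpoint": "https://s3.us-west-004.backblazeb2.com",
        "bucket": "game-backups-secondary",
        "key_env": "SECONDARY_KEY",
        "secret_env": "SECONDARY_SECRET",
    },
]

failures = []
for p in PROVIDERS:
    client = boto3.client(
        "s3",
        endpoint_url=p["endpoint"],
        aws_access_key_id=os.environ[p["key_env"]],
        aws_secret_access_key=os.environ[p["secret_env"]],
    )
    try:
        client.upload_file(BACKUP_FILE, p["bucket"], os.path.basename(BACKUP_FILE))
    except Exception as exc:  # one provider failing must not abort the other
        failures.append((p["name"], exc))

if len(failures) == len(PROVIDERS):
    # Zero offsite copies is the page-someone alarm, not a log line.
    raise SystemExit(f"DR FAILURE: all offsite uploads failed: {failures}")
```

The shape is the whole point: each upload is independent, one provider erroring out doesn't abort the other, and "no offsite copy landed anywhere" is treated as an emergency.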
 

Kimbere

Well-known member
PSA. Scroll past if you don't care. ;) :)

It's TENET, not TENANT. Tenant is either that annoying guy who rents the apartment next door, or David's last name missing an N.

(yes, I'm bored.)
You are correct. I failed at life and feel bad now.

j/k Thanks for pointing it out. Most days I know that, but apparently my spelling brain is on Easter break today. I'll edit the post and fix it.
 

Silvorn

Well-known member
It would be nice if I was reading more messages of encouragement to the IT team working this issue instead of people being critical. Apparently a lot of people have had the luxury of never working a job where there was a serious problem, and therefore lack understanding and empathy for the people working on this for us.
The standard in customer service has become: "We are aware there is a problem and are diligently working to resolve it. Thank you for your patience."

The customer gets treated like a mushroom. Kept in the dark and fed BS.

Twenty years in the Navy as an electrician and another twenty in food production maintenance. Communication helps.
 

Frantik

Well-known member
ah ... ty for making me read up

Data centers can be anywhere... they don't have to be in a building designed for servers and protected from catastrophes.

I don't need to read up to know that it's common for unrented office space to be leased to companies as data centers. Any space will do if it keeps costs down.

I doubt SSG uses a fancy provider with well protected infrastructure.
I am frustrated as well, but that is pure speculation. I would imagine the inverse could just as well be true; SSG and Daybreak Games make good money from games such as DDO and LOTRO... it would be extremely foolish not to choose a data centre with surge, fire, and lightning protection.
 

waxinglyrical

Active member
not in the real world
It would be a seriously flawed business decision to site a data centre in a leaky old building. I can see the advertising literature now: "Welcome to our data centre, sited in an elegant Jacobean Grade I listed building, replete with wattle-and-daub walls, ceilings with original beams, and a thatched roof."
 

CBDunk

Well-known member
Imagine calling a studio with licenses from both WotC and Tolkien "indie."

Minor technical quibble... the Tolkien estate / family hasn't held the film and merchandising rights to The Lord of the Rings since the late 1960s. For most of that time they've been with Saul Zaentz's company, now trading as Middle-earth Enterprises. SSG's license, like the Peter Jackson films, runs through that company... not any Tolkien. (The Amazon series is the odd one out; that deal was struck with the Tolkien Estate itself.)

That said, the overall point is valid.
 

Oliphant

Well-known member
It would be a seriously flawed business decision to site a data centre in a leaky old building. I can see the advertising literature now: Welcome to our data centre, sited in an elegant Jacobean grade I listed building replete with wattle and daub walls and ceilings with original beams and thatched roof.
Tell me you don't know about thatched roofs without telling me. :geek:
 

liosliante

Well-known member
They should tell us something from time to time. Keeping us in the dark feels like they're dragging their feet. I don't know if that is the correct expression.
 

Sweyn

Well-known member
Looks like they're making progress. My launcher is now loading more things instead of just saying "Initializing"... Game still down though.
 

New friend

New member
People who pay a subscription for this game should really reconsider it. Downtime of ~2 days is mind-boggling: no rollback, no contingency or mitigation plan, no clear escalation path for issues. How is their issue management process this bad? Why are you paying a sub fee if they can't competently manage a game?
 

Questor56

Member
LOL, We <3 them doing their job!!!!!!!!
Yes rohmer, I luv people "doing their job". In particular, I really appreciate the public service people who have worked long overtime hours to get power restored multiple times in my life. And they are not the only ones. A shout-out and "thanks" to anyone on this forum who has done that work.
 
I have to laugh at all of you who are all-in on "a good data center is foolproof! They must have cheaped out!" Two real-world examples, both Microsoft Azure.

One time they were doing cleanup and had a script to delete old SQL VM instance backups. Unfortunately the script was written wrong, and instead of deleting "old backups" it deleted the live customer VMs. Every SQL instance running on the whole East Coast node went poof. Oopsie. That was a three-hour outage for my company.

Another real case: they somehow had a whole rack of management servers that sat entirely outside all the fault-tolerance mechanisms. No backups, no secondary power supply, not even scripted into the VM management systems. Then they had a power failure. All the actual customer machines were fine and failed over cleanly, but this rack of network routers, VM managers, etc. went down hard. When it got back on power and rebooted, there was nothing to tell the master VM manager that these servers were supposed to be core infrastructure; it just saw a bunch of high-powered servers and started spinning up high-price-tier customer VMs on them, overwriting all the critical VMs that weren't backed up. That one didn't hit my company, but it was a 24+ hour outage for a lot of Azure customers.
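For anyone wondering how a cleanup script manages to eat live VMs, the bug class is tiny. Here's a made-up sketch of the shape of that first failure (this is not Azure's actual tooling; the field names and helper are invented for illustration):

```python
# Hypothetical reconstruction of the "cleanup deletes the live VMs" bug.
from datetime import datetime, timedelta, timezone

CUTOFF = datetime.now(timezone.utc) - timedelta(days=30)  # made-up retention

def pick_deletions(instances):
    """Return the instances a cleanup pass should delete: OLD BACKUPS ONLY."""
    doomed = []
    for inst in instances:
        is_backup = inst["kind"] == "backup"   # i.e. not a live customer VM
        is_old = inst["created"] < CUTOFF      # past the retention window
        # The disaster described above is one keyword away:
        #   if is_backup or is_old:   <- sweeps up every live VM as well
        if is_backup and is_old:               # correct: old backups only
            doomed.append(inst)
    return doomed

# Quick check: an old live VM must survive, an old backup must not.
fleet = [
    {"kind": "vm",     "created": datetime(2023, 1, 1, tzinfo=timezone.utc)},
    {"kind": "backup", "created": datetime(2023, 1, 1, tzinfo=timezone.utc)},
]
assert all(i["kind"] == "backup" for i in pick_deletions(fleet))
```

One wrong boolean in a script running with fleet-wide delete rights, and there's your East Coast outage.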
 