DDO unavailable: Saturday March 30th

Margona

Member
From a purely financial point of view it is never beneficial to outsource your infrastructure, and I'll explain why below. There can be secondary benefits that make it worthwhile, but the infrastructure itself never comes out cheaper.

Let's say you buy $100K worth of infrastructure. You can take that as CapEx and budget it over its ~5 years of life expectancy at $20K per year, while also taking depreciation as a tax deduction. If you want to migrate your infrastructure to "the cloud", which is just marketing slang for "another man's datacenter", then you will be paying yearly OpEx instead. Another man's datacenter is just leasing you time on the exact same $100K worth of infrastructure, except they oversubscribe it 2:1 at the best price tier and 5:1 or worse on the lowest tier. They are doing the exact same CapEx + depreciation that you would have done, only now they are charging you, and two to four other people, $50K/yr each to use that $100K worth of infrastructure. You will always spend at least double, if not triple, the amount of money over that 3~5 year lifecycle, possibly even more since some places keep low-priority infrastructure around for 10+ years. And this is before the tax benefits of the CapEx + depreciation model over the OpEx one.
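A quick back-of-the-envelope sketch of the comparison above. All figures are the hypothetical ones from this post ($100K hardware, 5-year life, $50K/yr rent), not real pricing:

```python
# Owning vs renting the same $100K of hardware over one lifecycle,
# using the hypothetical numbers from the post above.

CAPEX = 100_000               # purchase price of the hardware
LIFESPAN_YEARS = 5            # assumed useful life
CLOUD_RENT_PER_YEAR = 50_000  # hypothetical per-tenant rental price

# Owning: spread the purchase over its life (straight-line).
own_cost_per_year = CAPEX / LIFESPAN_YEARS        # $20K/yr
own_total = own_cost_per_year * LIFESPAN_YEARS    # back to $100K total

# Renting: pay OpEx every year for the same period.
rent_total = CLOUD_RENT_PER_YEAR * LIFESPAN_YEARS

print(f"own:  ${own_total:,.0f} over {LIFESPAN_YEARS} years")
print(f"rent: ${rent_total:,.0f} over {LIFESPAN_YEARS} years")
print(f"rent/own ratio: {rent_total / own_total:.1f}x")
```

With these numbers renting comes out 2.5x the cost of owning, which is the "at least double, if not triple" claim, and that's before depreciation is counted on the owning side.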

Where "another man's datacenter" can help is with variable loads or multi-region access. If you have an application that runs at low utilization for 20 hours a day but insanely high utilization for the other 4, then you can go cheap for the 20 hours and pay more for those 4. Also, running datacenters, even colos, in multiple regions gets expensive and kind of silly; abstracting it by renting capacity in "another man's datacenter" can make that cost more manageable. The final benefit is simply not having to manage physical infrastructure, which is invaluable for software startups. If a company is just getting going or is super small, it needs more flexibility than the 3~5 year infrastructure lifecycle offers, and renting can come out better.
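The bursty-workload case can be sketched the same way. The capacity units and hourly rate below are made-up illustration numbers, not any provider's pricing; only the 20-hour/4-hour split comes from the post:

```python
# Hypothetical bursty workload: 20 hours/day at low load, 4 hours/day at peak.
# Compare provisioning for peak capacity 24/7 vs paying only for what you use.

PEAK_UNITS = 10            # capacity units needed during the 4-hour burst
BASE_UNITS = 1             # capacity units needed the other 20 hours
RATE_PER_UNIT_HOUR = 0.50  # hypothetical hourly price per capacity unit

# Fixed provisioning: own/rent peak capacity around the clock.
fixed_daily = PEAK_UNITS * 24 * RATE_PER_UNIT_HOUR

# Elastic: pay only for what each window actually needs.
elastic_daily = (BASE_UNITS * 20 + PEAK_UNITS * 4) * RATE_PER_UNIT_HOUR

print(f"fixed:   ${fixed_daily:.2f}/day")
print(f"elastic: ${elastic_daily:.2f}/day")
```

The spikier the load, the more the elastic model wins; with a flat load the two converge and the ownership math from the previous post takes over.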

The economics never work out unless you are seriously overbuying on infrastructure, we're talking 2~3x more than what you need. This is why everyone is moving back to a hybrid model of on-prem or leased rack space with some functions in "another man's datacenter". Equinix has this really cool model where they'll run fiber from a port outside your rack into AWS, Google, Microsoft, or any of a couple hundred other providers' infrastructure. You then do BGP peering and voilà, you can communicate directly between your infrastructure and the other provider's infrastructure without ever egressing onto the public internet. You can run heavy processing and data on a system you own while having it accessed by a "cloud" service in the single-digit millisecond range. It was really cool to see a bunch of EC2 instances running on private storage.
And where, may I ask, did you factor in the cost of personnel monitoring your datacenter 24/7/365(366), and the cost of keeping them competent and relevant?
That's the big cost of having your own.
 

nix

Well-known member
It's unplanned for a reason. Servers died. **** happens. Deal with it.
Isn't DDO all on VMs?

Of course, the much vaunted case for virtualisation is that all you need is some [any] hardware to bring the VMs back up.
 

Frantik

Well-known member
It's been over 10 hours now... and I know Cordovan wrote that the next message would be either an ETA or "Worlds are up", but I'm getting really impatient here in cold and rainy Europe.
 

nobodynobody1426

Well-known member
So you're saying they failed their DC check? :ROFLMAO:

Oh, we clowned on our storage guys for the next week, because it's a super expensive system that is supposed to instantly ensure cross-datacenter redundancy in case we lose one. What it does is synchronize the disk storage systems so all VM datastores exist at both locations, and since they are on a stretched Layer 2 VLAN, the IP address routing also works the same at both. If we lose a datacenter, all services automatically start back up at the secondary site with no data loss.

So .. the system that is supposed to ensure no service outages ... caused a service outage.
 

Blunt Hackett

Well-known member
To be fair, 99% of the time it's SSG's fault. When you make consistent mistakes for 10+ years you are forever guilty until proven innocent.
Servers go down. It doesn't matter if it's a hobbyist or MIT's engineers. Placing blame is asinine because even if someone is to blame, crap happens with computers.
 

Fhrek

One Badge of Honor achieved
OMG, can we please get back to talking about Hamsters and ChatGPT stories?!?
Please wait while we're investigating the issues.
Thanks for your patience... SSG Hamster shutdown response team!
 

nobodynobody1426

Well-known member
And where, may I ask, did you factor in the cost of personnel monitoring your datacenter 24/7/365(366), and the cost of keeping them competent and relevant?
That's the big cost of having your own.

You will be doing operational monitoring regardless; there is no additional cost. I would know: I just got done architecting and then building a brand-new operational management system using Datadog as the platform.

Personnel costs are a different story; that really depends on what your team's needs are. For a smaller shop, they can save money by not paying someone $70~130K/yr (location dependent), but that's small peanuts in the grand scheme of things. Again, I would know this because we just got done hiring a new body to take over VMware/physical infrastructure management with a side job of assisting the Windows admin team; I was one of the screeners/approvers for that position. Previously those duties were shared amongst the whole crew, but we really should have a dedicated person for some of this stuff.

Just to illustrate how little that body cost is, we had an outside company come in and do a cloud migration assessment to give us the cost of moving our entire production to a cloud vendor (AWS/Azure/Google). After a month of collecting performance data, the estimated cost was seven figures per year. Migrating our non-production environments would have cost an additional eight figures. That is when we decided to go with the hybrid model.
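To make the "small peanuts" point concrete: take the top of the salary range quoted above and the lower bound of "seven figures per year". Both numbers are rough illustrations from this post, not the actual assessment figures:

```python
# How small a dedicated admin's salary is next to the cloud migration bill,
# using the rough figures from the post above (illustrative, not the real quote).

admin_salary = 130_000           # top of the quoted $70~130K/yr range
cloud_prod_per_year = 1_000_000  # "seven figures per year", lower bound

ratio = cloud_prod_per_year / admin_salary
print(f"The cloud production estimate would pay roughly {ratio:.1f} admin salaries per year")
```

Even at the most conservative reading, one year of the cloud bill funds the on-prem admin several times over, which is why the salary "saving" doesn't drive the decision.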

This is why AWS, Microsoft, Google, Oracle, and Meta do not "move to the cloud". Instead they built their own systems on infrastructure they own and then convince other people to pay them money to rent it. Bare metal at Equinix or Dell APEX are far better models for IaaS: let someone else manage the hardware while you lease it over contracted periods of time.
 

Br4d

Well-known member
And where, may I ask, did you factor in the cost of personnel monitoring your datacenter 24/7/365(366), and the cost of keeping them competent and relevant?
That's the big cost of having your own.

You double up on skills by hiring the same people to do your engine work and maintain and optimize the data center. The advantage is that they're always looking at optimizing the engine and network interactions. Neither task is a full-time job after initial setup and development, but both will improve the product over time, and they interleave constantly as conditions and priorities change.
 

ACJ97F

Well-known member
The intern in charge of the 5-gallon bucket of server glue, accidentally made a Dragonborn Monk / Ranger, so he nerfploded.
 