


An outage causes users to lose their pictures and other personal data, casting doubts on the cloud

Cloud computing is one of the hottest buzz words in the computer industry today.  All of the biggest companies -- Microsoft, Google, Amazon, Yahoo -- are trying to jump on it and figure out how to sell it to customers.  However, outages in service have led many to doubt whether the cloud -- offloading storage, computing and other resources to a centralized external location -- is really such a good idea.

Microsoft's subsidiary Danger, purchased in 2008, is one of the most extensive adopters of cloud computing.  All customers of the company's Sidekick phones rely on Danger's cloud services to feed the contacts, calendar, IM and SMS, media player, and other applications on the phone, and conversely to store data from those apps.  The service seemed convenient and efficient.

However, Danger has experienced a catastrophic cloud computing failure.  Starting Friday, October 2, customers across the country began to lose their service.  The service remained out through the entire weekend and was finally restored by Tuesday, October 6.

Then came the bad news for Danger's customers -- it had lost all of their data, including personal items like pictures.  A T-Mobile message to subscribers states, "Regrettably, based on Microsoft/Danger’s latest recovery assessment of their systems, we must now inform you that personal information stored on your device – such as contacts, calendar entries, to-do lists or photos – that is no longer on your Sidekick almost certainly has been lost as a result of a server failure at Microsoft/Danger."

T-Mobile urged users to keep their phones on and charged during the outage.  According to Engadget, the company has also suspended sales of the Sidekick phones.

The disaster places Microsoft in an awkward position.  As a strong supporter of cloud computing that produced the first widely available cloud operating system -- Windows Azure -- Microsoft obviously believed the practice to be sound.  And with Danger building Microsoft's upcoming phones, codenamed "Pink", it seems likely that Danger was going to deliver services to those phones via the cloud.

For now the fallout is mostly on Danger's shoulders, but Microsoft has to weigh whether to risk taking such a public relations hit on its own phones by opting for services from the cloud.  When it comes to cloud computing, it's clear that while the idea is promising, poor implementations and lack of redundancy can mean major headaches for all parties involved.



Comments

poor implementation is the failure of ANY system
By tastyratz on 10/12/2009 10:41:07 AM , Rating: 4
This happening is proof of poor implementation and means nothing for the viability of cloud computing. Any server without a backup is no better than any user without a backup. If this happened, they simply did not have enough backups/redundancy.

For them to crash at this level/class is a MASSIVE failure though, and at minimum it should result in some serious equipment/policy/staffing changes. A simple, cheap weekly tape rotation could have downscaled this significantly from "we lost everything" to "we lost the last 7 days".
What a bunch of clowns.
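A minimal sketch of the kind of weekly rotation described above, assuming the source and backup paths are hypothetical: each night's archive overwrites the slot for that weekday, so even a total loss of the live data costs at most the last seven days of changes.

import datetime
import tarfile
from pathlib import Path

SOURCE = Path("/srv/userdata")      # hypothetical live data directory
BACKUP_ROOT = Path("/mnt/backups")  # hypothetical rotation media mount point

def nightly_rotation_backup():
    """Write tonight's full backup into the slot for the current weekday.

    Seven slots (mon.tar.gz .. sun.tar.gz) are reused in a cycle, so the
    oldest surviving backup is never more than seven days old.
    """
    slot = BACKUP_ROOT / (f"{datetime.date.today():%a}".lower() + ".tar.gz")
    tmp = slot.with_name(slot.name + ".tmp")
    with tarfile.open(tmp, "w:gz") as tar:
        tar.add(str(SOURCE), arcname=SOURCE.name)
    tmp.replace(slot)  # only overwrite the old slot once the new archive is complete

if __name__ == "__main__":
    nightly_rotation_backup()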




RE: poor implementation is the failure of ANY system
By jimhsu on 10/12/2009 10:54:13 AM , Rating: 2
A minimal HOME server backup strategy:
1. Daily backups to a local disk
2. Weekly backups to 2 offsite locations.

Don't tell me they lost both?

A strategy that is "adequate" for business use would also implement redundant hot-swappable servers, DB replication and instant switchover, redundant power, redundant backbones, Access Control Levels, etc. No concrete bunker needed here. Come on, do people not know how to build servers these days?
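A rough sketch of that home strategy, assuming rsync is installed and that the local disk and the two offsite destinations are hypothetical placeholders:

import datetime
import subprocess

SOURCE = "/home/data/"                  # hypothetical data to protect
LOCAL_DISK = "/mnt/backup_disk/daily/"  # hypothetical local backup disk
OFFSITE = [                             # hypothetical offsite destinations
    "friend-nas:/backups/home/",
    "user@remote-vps:/backups/home/",
]

def run_rsync(src, dst):
    # --archive keeps permissions/timestamps, --delete mirrors removals
    subprocess.run(["rsync", "--archive", "--delete", src, dst], check=True)

def daily_backup():
    run_rsync(SOURCE, LOCAL_DISK)

def weekly_backup():
    for dst in OFFSITE:
        run_rsync(SOURCE, dst)

if __name__ == "__main__":
    daily_backup()
    if datetime.date.today().weekday() == 6:  # Sunday: also push to both offsite copies
        weekly_backup()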


RE: poor implementation is the failure of ANY system
By mcnabney on 10/12/2009 11:50:24 AM , Rating: 2
Oh come on!

Two offsite locations?
For Home backup?

I am super-paranoid and I have my data backed-up nightly to WHS and have bare drives stored in my Dad's fire-safe just in case something bad happens at my home. I have a hard time believing that anything outside of a nuclear war is going to threaten personal data integrity with those simple precautions. Of course, Home users don't care about up-time.

/quantity of data = 11TB and counting


By jimhsu on 10/12/2009 12:14:01 PM , Rating: 2
Hurricanes? Earthquakes? SoCal wildfires? The chance that something happens to both your home backup and a "convenient" offsite location is highly correlated, assuming of course that you and your Dad live in the same general area. I know this after living in the two areas where the above occur -- Texas (Allison, Rita, Ike) and CA. Fortunately nothing happened to the data though.

By two offsite locations, I generally mean a) a friend's house and b) one of those "dreaded" cloud backup services such as Mozy - preferably not in the same state. The cloud is good for one thing - maintaining a low correlation of failure to whatever backup solution that you maintain yourself.


By HrilL on 10/12/2009 12:19:08 PM , Rating: 3
That's one hell of a lot of porn my friend.


RE: poor implementation is the failure of ANY system
By The0ne on 10/12/2009 1:44:45 PM , Rating: 2
Yes, I agree you are super-paranoid. 11TB of data? Hmm, I find it hard to believe a "home user" would have this much data, not that it's impossible, such as being in the "scene." :)


By mindless1 on 10/12/2009 3:15:35 PM , Rating: 2
Not hard at all: start backing up your DVD collection to ISOs and recording a television service, and watch the TB grow. Many of us with HTPCs have faced this situation for years. It just got a lot worse when HD resolutions (and the hardware to handle them) became popular and we all started switching to HD TVs with per-pixel accuracy high enough that lossy compression codecs left visible artifacts... and you don't want to store something with lossy compression if you can avoid it, considering that TVs and computer panels are just going to get better over time.

"Scene" software doesn't seem to be much but a drop in the bucket compared to video file sizes, especially if you're a fan of series.


RE: poor implementation is the failure of ANY system
By The0ne on 10/12/2009 3:44:15 PM , Rating: 2
No doubt, I have 3TB of just movies and music, and that's a lot of movies. 40% of them are in 720/1080 and the rest in SD (less likely to be watched over and over). I have another 1TB for my games (retro/classics) and anime. Again, that's A LOT of things to watch, collected over the years.

Again, unless you're in the "scene" and don't own what you put on your drives, the typical home user isn't going to use, much less have, that much space.

I'm really hoping BR comes down in price, especially the media. I'd like to use it as another storage/backup medium.


By Silver2k7 on 10/13/2009 3:47:55 AM , Rating: 2
If you want to back up a few TBs, there aren't really many good choices for home users. DAT/tape drives, at a cost of $1000-2000 per drive alone, are perhaps good but too expensive for most home users.

25GB BD-R discs seem to be the next best thing after more hard drives.

50GB BD-R would be preferable, but is too expensive per GB. Hopefully the new 75GB/100GB BD-R (2010-2011?) will become affordable, which would make for a fairly good home backup solution.

Some new optical/holographic disc with 500GB-1TB would perhaps be ideal, but the industry seems slow to bring these to market in an affordable price bracket for home users.

This would almost be a comeback to the time when CD-R was introduced and you could put your entire hard drive and then some on one disc. Hopefully the optical/holographic industries will catch up with hard disk drives soon!


By jordanclock on 10/12/2009 10:59:58 AM , Rating: 4
What you're talking about is one of the biggest turn-offs of "cloud" computing: You don't know where your data really is located. Is it one location? Or backed up to several? Is your compute job going slower because the hardware it's on is spread too thin? Is your data in a physically safe location? Is it on the same server as someone's warez, and thus subject to being confiscated by authorities?

"Cloud" anything is a big headache for anything remotely serious. This incident just goes to show that it is not the solution for everything. It will be a long time before you can even begin to trust the "cloud" to completely replace having local copies of your data to properly back-up.

I think part of the reason for the lack of proper back-ups is that with all the money put into setting up a "cloud" to store and process all the data, there isn't much left over for backing it all up. Add in the hubris of cloud supporters and their belief that the cloud should inherently be able to avoid catastrophic failures like this, and you're going to start seeing stories like this far more often.


By Alexstarfire on 10/12/2009 12:15:02 PM , Rating: 3
What I can't figure out is why they thought storing cell-phone data on a cloud was a good idea in the first place. What exactly is the purpose of a cloud storing contact information, pictures, notes, etc..? Seems like there is no reason at all for cell phone information to be stored on a server like that.


By jimhsu on 10/12/2009 12:16:44 PM , Rating: 2
I still like cloud backup (encrypted with a private key, of course) for maintaining low correlations with whatever backup strategy that you also use. The probabilities that your data fails at the same time as your cloud provider are quite independent, barring the possibility that the US gets nuked or similar.
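A minimal sketch of what "encrypted with a private key" can look like before anything leaves the machine, using Python's cryptography package; the file names are hypothetical. Only the ciphertext is ever handed to a Mozy-style provider, and the key stays with you.

from cryptography.fernet import Fernet

def make_key(path="backup.key"):
    """Generate a symmetric key once and keep it out of the cloud."""
    key = Fernet.generate_key()
    with open(path, "wb") as f:
        f.write(key)
    return key

def encrypt_file(src, dst, key_path="backup.key"):
    """Encrypt src locally; only the resulting dst is uploaded to the provider."""
    with open(key_path, "rb") as f:
        cipher = Fernet(f.read())
    with open(src, "rb") as f:
        plaintext = f.read()
    with open(dst, "wb") as f:
        f.write(cipher.encrypt(plaintext))

# Usage: make_key() once, then encrypt_file("photos.tar", "photos.tar.enc")
# before sending photos.tar.enc to the cloud backup service.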


By kattanna on 10/12/2009 11:04:15 AM , Rating: 2
quote:
A simple cheap weekly tape rotation could have downscaled this significantly from "we lost everything" to "we lost the last 7 days"


Agreed. The fact that they are saying they have lost ALL data is most telling.

Even a simple hard drive array could hold a weekly or monthly backup.

For them to have complete data loss is unforgivable in this day and age.


HAHAHAHAHA
By amanojaku on 10/12/2009 10:42:28 AM , Rating: 5
quote:
An outage causes users to lose their pictures and other personal data, casting doubts on the cloud
First of all, this issue was raised the instant cloud computing was proposed. Few companies have embraced cloud computing because of governmental restrictions on security and availability, but even those that can are wary because of the lack of control over the cloud. It's not enough that someone gives you assurances that your data is safe: the only way to "guarantee" data protection is to attempt to break the cloud, which no company is going to allow you to do, or to keep offsite copies. If you aren't allowed to break it then you don't know how secure it really is, and offsite copies are expensive to maintain and restore. You could be out of business by the time you restore your data.
quote:
Cloud computing is one of the hottest buzz words in the computer industry today.
We must be speaking to vastly different customers. In my four years at VMware I have yet to hear any customers ask about the cloud. Virtualization is still suspicious after 10+ years in operation on the x86, and clients have full control over their implementations. Sharing computing space with unknown entities is just a nightmare no one wants to face. By your own admission companies are having a hard time selling this, which is telling considering modern cloud computing has been around for about 10 years, as well.




RE: HAHAHAHAHA
By Master Kenobi (blog) on 10/12/2009 11:17:43 AM , Rating: 1
Cloud computing has been around since the mainframe days, they just renamed it.


RE: HAHAHAHAHA
By amanojaku on 10/12/2009 11:26:24 AM , Rating: 4
I know I sound like an ass, but please read before posting.
quote:
By your own admission companies are having a hard time selling this, which is telling considering modern cloud computing has been around for about 10 years, as well.
I even pointed out that virtualization has been modernized, too, and that's been around since the 60's, as well. Very few technologies aren't reimaginings of previous technologies.


RE: HAHAHAHAHA
By Master Kenobi (blog) on 10/12/2009 11:59:59 AM , Rating: 2
Your making the inference that there is a difference between the two, there isn't.


RE: HAHAHAHAHA
By jimbojimbo on 10/12/2009 2:49:39 PM , Rating: 3
There is a difference between your and you're though.


RE: HAHAHAHAHA
By Master Kenobi (blog) on 10/12/2009 3:47:52 PM , Rating: 2
Guilty as charged. I have had my coffee now.


RE: HAHAHAHAHA
By jimhsu on 10/12/2009 12:20:32 PM , Rating: 2
Larry has something to say about that...

http://www.downloadsquad.com/2009/10/05/larry-elli...


RE: HAHAHAHAHA
By brybir on 10/12/2009 11:56:24 AM , Rating: 2
You must be talking to different customers then.

Virtualization pickup has partially lagged due to applications that do not behave themselves when run on VT, the difficulty of educating people on how to properly manage virtualization, and hardware challenges that do not always make it clear that virtualization is a cost-effective strategy. In those areas where the above is not true, you see wide-scale adoption of virtualization.

The same is true of cloud computing to some extent. While some of your customers are certainly wary of "the cloud," many are embracing it through various tools such as Google's offerings and Amazon's offerings or even Microsoft's options. In many instances cloud computing can be an excellent way for certain businesses to access sophisticated IT and information systems tools at a cost far below that of other options. For other businesses, this is not the case given other factors or concerns that they have.

So I guess what I am saying is that there is in fact a large market developing for cloud computing, and for large segments of business and individuals it makes a lot of sense from a cost/benefit aspect. Certainly, with major players like Amazon, Google, MS and others pushing cloud computing, you can expect that issues like this will have resolutions and that things like trusted computing in a cloud will be developed more effectively as time goes on.


By echtogammut on 10/12/2009 2:42:37 PM , Rating: 3
The above conversation is amusing, but it is missing the mantra that every admin should know by heart: "Test your backups." I have seen several very elaborate backup configurations that ultimately failed because no one ever attempted an actual recovery from the backups. At the very least, I recommend every corporate-level data storage system have an onsite and offsite backup that matches its level of acceptable data loss. Furthermore, if you are a telco and decided not to implement an HA fail-over configuration, you deserve to be sued by everyone who lost data they entrusted to you.
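A minimal sketch of the "test your backups" mantra, assuming tar archives and hypothetical paths: restore into a scratch directory and compare checksums against the live data, rather than trusting that the archive is readable.

import hashlib
import tarfile
import tempfile
from pathlib import Path

def sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(archive, live_root):
    """Restore the archive into a temp dir and checksum it against the live data.

    Assumes the archive stores files under the top-level directory name,
    as in the rotation sketch earlier in the thread.
    """
    live_root = Path(live_root)
    with tempfile.TemporaryDirectory() as scratch:
        with tarfile.open(archive) as tar:
            tar.extractall(scratch)
        restored_root = Path(scratch) / live_root.name
        for live_file in live_root.rglob("*"):
            if live_file.is_file():
                restored = restored_root / live_file.relative_to(live_root)
                if not restored.exists() or sha256(restored) != sha256(live_file):
                    return False
        return True

# Usage: verify_backup("/mnt/backups/sun.tar.gz", "/srv/userdata")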




By The0ne on 10/12/2009 5:44:14 PM , Rating: 2
That is just really lame and lazy imo. How can one not test the backups to make sure they can be restored properly? But yes, this is commonplace as well, especially with technology. People tend to just give credit to the technology, even on someone's say-so, and use it without much thought. Pretty dangerous thinking and/or trust imo. Then again, I can't really complain, because I'm a Test Engineer and I have to test things to make sure they are what they should be.


By lco45 on 10/12/2009 7:59:55 PM , Rating: 2
How right you are.
On several occasions I've seen people try to restore from a backup only to find that the backups haven't been working for months.
The most memorable was an entire casino that was bought by a company I was working for. Their backups hadn't been working for over a year when they had an array failure.
All slot machines offline for 3 days, basically total loss of over a year's worth of email, an utter disaster.

Luke


By JediJeb on 10/13/2009 3:34:04 PM , Rating: 2
In our small company I noticed a bad set of backups caused by a very simple problem. The software used to burn data to CDs was burning it in a format that only that one machine could read. Had we lost that software, we would have lost the backups. This was back when we could still use CDs for backups, but if we had to look that data up now, we would be out of luck, or facing a very expensive job of sending it out somewhere to get it recovered. The problem was corrected by making sure all data was written in a standard format.

It is usually the little things that cause the biggest headaches.


Weaken cloud computing how?
By HrilL on 10/12/2009 12:36:37 PM , Rating: 2
This company clearly had other issues. Even entry-level IT staff know how to implement a backup strategy. These people's data might not mean much to the company, but it does to the people it belongs to. RAID 5 should have been used; with a large array, 2 disks can fail and the array can be rebuilt. What are the chances of more than 2 disks failing in multiple servers? Highly unlikely. Where were their tape or external drive backups? These should be done nightly for what this company was doing. Off-site backups should have been used as well.

A company like this really gives cloud computing a bad name when really it was their incompetence as a business that led to their failures.

The start-up I work at is all about cloud computing. We're on the bleeding edge, and honestly a customer of ours would never run into this problem unless they completely ignored our best practices. Back your database up to S3, which Amazon then backs up again.

The cloud is the future of computing. It allows a developer to create an application and offer it as SaaS, and they can manage it all from one desk with little knowledge of running a server.
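A rough sketch of the "back your database to S3" step mentioned above, assuming boto3 is available and that the bucket name and dump file are hypothetical; S3's own internal replication then sits on top of this copy.

import datetime
import boto3

def upload_backup(dump_path, bucket="example-db-backups"):
    """Push a local database dump to S3 under a date-stamped key."""
    key = f"nightly/{datetime.date.today().isoformat()}/{dump_path.rsplit('/', 1)[-1]}"
    s3 = boto3.client("s3")
    s3.upload_file(dump_path, bucket, key)  # boto3 handles multipart upload for large files
    return key

# Usage: upload_backup("/var/backups/app_db.dump.gz")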




RE: Weaken cloud computing how?
By JediJeb on 10/13/2009 3:49:36 PM , Rating: 2
Maybe cloud computing is the future, but right now if a small to medium size company puts all of its apps into the cloud, one guy with a backhoe digging in the wrong place could shut it down for days if he cuts the internet cable. At my company we have used three different ISPs in 5 years, and we still have whole days without internet access because they can't keep it running. It has gotten better, but I would still hate to rely on internet access to keep crucial apps running.

Storing backups offsite in a datacenter I could live with. If the connection gets cut, I just couldn't get to some data for a short time, but if I couldn't even use my office programs because they were cloud apps, that would shut me down.


RE: Weaken cloud computing how?
By HrilL on 10/14/2009 11:40:45 AM , Rating: 2
We have 2 internet connections, one fiber and one coax. Each company uses a different backbone in town and can route traffic either north or south depending on where it's going. They both also have lines that connect to the university. We have dual routers that fail over if one dies. We pretty much have 99.95% up time. Each ISP also has an SLA of 99.95+% up time. The only thing that will take us down is a power outage, and in that case we couldn't use our computers anyway. Plus we have UPSs that will keep the network up for about an hour if we kill the PoE ports.
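A back-of-the-envelope way to see why the dual-ISP setup above helps, assuming the two links fail independently: the combined unavailability is the product of the individual unavailabilities.

def combined_uptime(uptimes):
    """Uptime of N independent links where any single one keeps you online."""
    downtime = 1.0
    for u in uptimes:
        downtime *= (1.0 - u)
    return 1.0 - downtime

single = 0.9995                           # one link at 99.95% uptime
dual = combined_uptime([0.9995, 0.9995])  # two independent links

minutes_per_year = 365 * 24 * 60
print(f"one link:  ~{(1 - single) * minutes_per_year:.0f} minutes of downtime per year")
print(f"two links: ~{(1 - dual) * minutes_per_year:.2f} minutes of downtime per year")
# one link: ~263 minutes; two links: ~0.13 minutes (assuming truly independent failures)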


danger, danger
By cludinsk on 10/12/2009 10:31:03 AM , Rating: 5
I guess you'd say the forecast for cloud computing is grey and stormy then?




Extent of Data Loss
By Cookoy on 10/12/2009 11:12:36 AM , Rating: 2
I don't think all data of all customers was lost! The lost info is just no longer accessible on the Sidekick phones, obviously. I don't think any reasonably large company has no backup strategy. And if you're in a business that is relatively on the cutting edge of IT technology, you would employ very competent and highly skilled people. So I really doubt that all is lost. Maybe some data, or for those less fortunate customers maybe all their data, but definitely not all data of all customers!




RE: Extent of Data Loss
By JediJeb on 10/13/2009 3:37:54 PM , Rating: 2
Unless they were too certain of their backups and were working with a minimal safety net to maximize profits. Many companies have downsized to save money thinking the little things are not important only to discover later that what they cut out could bring them down.


Danger
By aguilpa1 on 10/12/2009 1:25:03 PM , Rating: 2
Yes, how appropriate; it is dangerous to store your data with them. It's not like they didn't warn you.




RE: Danger
By stmok on 10/13/2009 12:27:44 AM , Rating: 2
Microsoft + Danger + Cloud
=> Some people just don't get life's little hints. ;)


Danger got too proud...
By Amiga500 on 10/12/2009 10:46:34 AM , Rating: 2
Of the technological terror they had constructed...




Hahaha
By Totally on 10/12/2009 11:41:36 AM , Rating: 2
I can imagine how the notice to Sidekick owners reads

"Dear Sidekick Owner,
We've accidentally all your data. Have a nice day!

Love,
T-Mobile"




RAID Array
By btc909 on 10/12/2009 1:41:42 PM , Rating: 2
A faulty controller can corrupt an entire RAID array. I've seen this happen before. Luckily the backup was able to recover the data.




Armageddon is nigh!
By Hieyeck on 10/13/2009 8:38:08 AM , Rating: 2
So long as Skynet stays caged and they don't use it to try and fix the problem.




I wonder...
By japlha on 10/13/2009 1:56:35 PM , Rating: 2
if they were using a developer as the DBA and system administrator? Maybe using file system backups to back up open database files? I'm certain that no testing was done on the backups to check if recovery is even possible.

I've seen this many times with Windows running SQL Server.
It's easy to set up and "get going" but no thought is given to disaster recovery.

It's almost impossible to completely lose data if proper backup and recovery procedures are understood and in place.

Wonder if the job was given to the lowest bidder?




Danger?
By paydirt on 10/14/2009 10:03:04 AM , Rating: 2
I love ironic names.




I find it hard to believe...
By Motoman on 10/12/09, Rating: -1
RE: I find it hard to believe...
By Smilin on 10/12/2009 11:23:43 AM , Rating: 1
What does RAID have to do with backups?

RAID is for preserving server uptime, not data. That's what backups are for.


RE: I find it hard to believe...
By Motoman on 10/12/09, Rating: -1
RE: I find it hard to believe...
By Lord 666 on 10/12/2009 11:39:39 AM , Rating: 4
Sorry, you're fired.

RAID is to keep uptime as close to 99.999 as possible. However, RAID arrays can become corrupted, or the building can burn down, rendering everything useless. Backups (especially offsite rotations or over the WAN to a hot site) will ensure business continuity.


RE: I find it hard to believe...
By Motoman on 10/12/2009 11:57:14 AM , Rating: 1
http://en.wikipedia.org/wiki/RAID

A RAID array provides higher uptime as well as better data reliability. So before firing anyone, you might want to learn what you're talking about.

I'm in no way conflating RAID arrays with offsite backups. I'm just saying that it is perfectly clear that the proper use of basic redundant storage, like a RAID array, would make it very difficult to lose *all* your data at once.

Sure, the building can burn down, the array can corrupt etc. But every array corrupting at the same time? Don't think so. And we know the data center didn't burn down so...


RE: I find it hard to believe...
By jimhsu on 10/12/2009 12:24:50 PM , Rating: 5
You are assuming that there was a hardware failure - I don't think that was mentioned. Perhaps they messed up the MFT of their super RAID 6+1+1 array or an employee was surfing porn under a root account, or even someone (accidentally or purposely) did a rm -rf or dd.

Those "other" forms of failure are what backups are for. RAID doesn't help.


RE: I find it hard to believe...
By Motoman on 10/12/09, Rating: 0
RE: I find it hard to believe...
By MatthiasF on 10/12/2009 2:15:29 PM , Rating: 4
Have you ever done a massive data recovery on a RAID array? There's no way to easily check the data to make sure things were recovered properly. You're going to get a great deal of error recovering from disk after a hardware or software foul-up -- errors that will make files unreadable or unusable.

RAID does not remove the necessity for reliable backups. As others have said above (and been rated down for no good reason), RAID is only a way to keep the server running when a problem crops up. It is not a reliable backup method, not even RAID 1.


RE: I find it hard to believe...
By Motoman on 10/12/09, Rating: 0
RE: I find it hard to believe...
By ViroMan on 10/12/2009 4:48:23 PM , Rating: 2
The only thing that comes to mind is an infection... THAT would make them say that all of the data, including backups, was lost, if a number of the backups have been infected for some time. Better for them to announce that everything was lost due to a possible crash than to let out that they were compromised by an infection and lose everyone's trust in their security.

I could forgive a crash that loses all my stuff (once, if at all) because it might be out of their hands, but to have my stuff infected would make me not trust their security for a very long time.


RE: I find it hard to believe...
By Lord 666 on 10/12/2009 2:24:09 PM , Rating: 2
Your above post just proves you are missing the entire point, along with your clear lack of enterprise IT experience.

RAID is just a small piece of the BC/DR puzzle. As Kenobi said, more than likely it was part of an upgrade window. I'm thinking they hosed the entire LUN or even the SAN, and that was their only data storage.


RE: I find it hard to believe...
By Motoman on 10/12/09, Rating: 0
RE: I find it hard to believe...
By Lord 666 on 10/12/2009 3:18:26 PM , Rating: 2
Ok Moronman, believe whatever you want. No point of yours was missed, but you didn't have a valid discussion to begin with.

You are just trying to say that data forensics can recover striped volumes or even RAID 1+0 across multiple disks. Yes, there is a slight chance some information can be recovered. So here is a link for the Danger team - http://www.salvagedata.com/raid-data-recovery.

However, due to your lack of vision, experience, and professionalism (arguing a moot point with several people), you are still fired. Even better, based on what you posted today, I would not have hired you in the first place.


RE: I find it hard to believe...
By Motoman on 10/12/09, Rating: -1
RE: I find it hard to believe...
By Black69ta on 10/13/2009 1:11:50 AM , Rating: 1
Sorry, but I agree with Motoman here. To completely lose ALL data on a RAID seems like it would be a rare occurrence. What's the point of a Redundant Array of Independent Disks if the risk of physical data loss is not reduced? I don't think they would choose an array of only a few disks, so the more disks they use to build the array, the smaller the chance of losing data, assuming a redundant array rather than one built for performance. Most data farms have redundant power supplies and line conditioners/battery back-ups. So a widespread catastrophic data failure seems to me like it could only be outside data corruption, i.e. a virus or some other kind of attack.


RE: I find it hard to believe...
By talikarni on 10/12/2009 4:46:11 PM , Rating: 2
Microsoft tech support: yes now type format X: -rf /all
(several hour pause)
ok now go buy the proper Windows Server 2008 again


By lainofthewired on 10/12/2009 12:49:52 PM , Rating: 5
Wow, nerd fight.

Proceed.


RE: I find it hard to believe...
By jimbojimbo on 10/12/2009 2:39:19 PM , Rating: 1
If you believe a simple RAID system can protect your data, you are dumb. My take from the article was that there was some sort of database corruption. If you have a large database on any sort of RAID volume and it becomes corrupt, and various tasks run trying to recover it, that disk gets written to over and over. Even IF you could recover the database files, it is highly likely some piece will go missing, which screws the whole database.

I hope you enjoy your RAID setup at home, thinking you're completely safe until your controller card dies or someone deletes a large file without you finding out about it until days later, after several writes have already happened.


RE: I find it hard to believe...
By Motoman on 10/12/09, Rating: 0
RE: I find it hard to believe...
By TSS on 10/12/2009 7:39:13 PM , Rating: 2
Hmm, yes, this looks like a good place to reply.

In the summer of 2005 I started my IT internship at a school community here in Holland. They had about 5000 students and 800 teachers on the same network across 6 different locations connected by fibre. And they told me their backup system:

A RAID 0+1 array in every server, a tape backup making incremental tapes daily and full backups every Friday, which was then also sent over fibre to a NAS in the nearby town.

With that system they lost 3 weeks' worth of data. Why? Because in the summer holiday there was one very hot day. The server room was ventilated as best they could, but temperatures got very high (for around these parts). The heat caused the memory to corrupt in several servers.

This corrupted data then got onto the hard drives and the RAID 1 mirrors; at night it was backed up into the backups, and that backup was then sent to the NAS in another town. The last good backup they had was the last tape they had physically pulled from the tape streamer and put into the safe next door.

The moral of the story? No matter how foolproof you think your system is, it isn't.

Until then, RAID is fine for home use; RAID 3 or above with a tape streamer for business use. Anything beyond that, unless it's very easy to set up, will cost more than it will benefit.

(btw, why would someone delete a file? Also, if the controller card dies while you're using RAID 1, that shouldn't affect the data, right? More than that is overkill for home use anyway)


RE: I find it hard to believe...
By JediJeb on 10/13/2009 4:05:52 PM , Rating: 2
quote:
(btw, why would someone delete a file? Also, if the controller card dies while you're using RAID 1, that shouldn't affect the data, right? More than that is overkill for home use anyway)


We came within a few days of losing a couple important files because someone used Cut instead of Copy, then never put the file back on the server. Luckily it was found just before the weekly backup overwrote the last full backup. Easy for an average user to delete something by accident.

Then of course there is the guy here who places several copies of his files in different folders on the server thinking he is "backing them up" because there are multiple copies lol.


RE: I find it hard to believe...
By gstrickler on 10/12/2009 1:10:46 PM , Rating: 2
RAID and Backups serve completely different purposes. The sole purpose of RAID (other than RAID 0) is ON-LINE disk fault tolerance. It protects against disk hardware failure on one (sometimes more than one) disk. Implemented at the OS/driver level with two or more controllers, it can also protect against a disk channel or controller failure when configured correctly.

While some RAID setups can improve performance, performance is secondary to data protection with all RAID other than RAID 0. RAID 0 isn't really RAID because it isn't redundant. RAID 0 exists for performance and/or capacity reasons, but it provides no redundancy and no data protection.

The purpose of backups is to protect against massive failure (fire, flood, disaster, extended power failure, multiple drive failure, server failure, etc.) and/or software failure, and/or non-disk hardware failure and/or user error (e.g. accidentally overwriting or deleting data) and/or data corruption. Backups should be stored off-line/near-line and off-site. Backup is far more comprehensive coverage than RAID.

RAID is not a substitute for a good, automated, multiple generation backup system. If any of your data can't be replaced from another source, in less time and/or with lower cost than a backup system, then you need a good backup system. Of course, if your data can all be replaced from another source, that is a type of backup, therefore, if your data is valuable, you need a backup system. If you haven't successfully tested restoring from your backup recently, then you don't have a backup.

Likewise, backups are not a substitute for RAID. If you can't easily and cost effectively recover/re-enter data that could be lost between backups, and/or you can't afford downtime, you need RAID.

Two separate types of data protection, with two different purposes, and minimal overlap between them. If you still think that RAID is a backup system, I would fire you also.

And now, for everyone's amusement, some "Cloud computing" related tag lines I use in my email. The last one is particularly appropriate to this situation. I wrote all of these, you're welcome to use them, but I do request that you attribute them to me:

Cloud computing - "smart graphics terminals" and centralized computing with a catchy new name.

Cloud computing - step 30 years into the past. Welcome to the post-PC world, and back to the days of mainframes and mini computers, but with pretty graphics.

Cloud computing - let's just pretend the personal computer revolution never happened.

Cloud computing - everything old is new again.

Cloud computing, the bane of the future.

- Geoff Strickler


RE: I find it hard to believe...
By rippleyaliens on 10/12/2009 2:32:14 PM , Rating: 2
The killer with all of this is the single point that proves many points: human arrogance. I cannot tell you how many customers are just 100% arrogant in the way they do things. I had one customer bragging about his 16TB SAN storage, how it could survive multiple drive failures, etc. I mentioned to him: with 1TB drives (at the time), do you have ANY idea how long it will take to bring up a hot spare with 1TB of data? He actually had 14TB of data in a RAID 6. Well, that will survive 2 drive failures, but the rebuild takes DAYS, not moments.

And here is the other killer: BACKUPS. The one single point that most people just won't commit to. As an example, I will pick on a Dell EqualLogic iSCSI SAN. Very fast, very nice, very sweet. I actually love those, as IT WORKS very nicely. BUT for every 3U, that is a possible 16TB of raw disk. If you did RAID 50, with 2x 8-drive RAID 5 arrays striped together, you have 14TB of data. VERY SWEET... except when it is time to back up that very data. For 14TB of data you're looking at Dell's top-of-the-line backup library, and even THEN it will only do 4.3TB of backup PER HOUR, and that is with 10x LTO-4 drives (180TB total native, x2 with compression). But the killer is the el-cheapo Dell tape autoloader: it costs 20k (no tapes) for a dual LTO-4 tape drive, and it does barely 1TB per HOUR (which is still A LOT of data).

Companies are doing multiple SANs, i.e. 2-3 storage devices, and relying on replication and the hardware. NONE of this protects against 1. acts of God (natural disasters, thunder, storms, floods, etc.), 2. loser error (a customer accidentally deletes something or passes a virus along, not knowing better), 3. the killer: disgruntled workers, sabotage, etc., or getting their account hacked... hehe.

Good luck, cloud computing. The upfront costs are cheap; server hardware and RAID arrays are CHEAP. Buying, implementing, and MORE SO TESTING the actual DR of a solution -- i.e. the soft dollars -- are the costs that matter:
Backup solutions
Backup hardware
Backup maintenance
Testing of the said solution
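The arithmetic behind those numbers is worth making explicit: the backup window is just data size divided by sustained throughput, which is why a 14TB array and a cheap autoloader don't mix.

def backup_window_hours(data_tb, throughput_tb_per_hour):
    """Hours needed to stream a full backup at a sustained rate."""
    return data_tb / throughput_tb_per_hour

# Figures from the comment above (approximate, vendor-quoted rates):
print(backup_window_hours(14, 4.3))  # top-end 10x LTO-4 library: ~3.3 hours
print(backup_window_hours(14, 1.0))  # cheap dual-drive autoloader: ~14 hours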


RE: I find it hard to believe...
By rippleyaliens on 10/12/2009 2:47:39 PM , Rating: 2
I WAS RIGHT. Comes to find out there was a SAN upgrade, YET no one did this little thing called a BACKUP. Rule number 1: two is one and one is NONE. Meaning backup, backup, backup... rofl... and then TEST the backup, backup, backup. If it takes days, then it just takes days to test.

MANY years ago -- 10, well, not many, but it feels like it -- I was doing an upgrade to a server, a Compaq, back in the day. I DID a backup, ran the Compaq upgrade (back then it was all on CD, for the server BIOS, the RAID controller BIOS, etc.). That worked. Ran updates for Windows NT 4, in the day. Did an IE update... BOOM, it trashed the RAID controller. Naturally it was not a known issue at the time. Well, long story short, I did a restore from tape, FULL server, granted only 2GB back then, ROFL.

My boss called me and asked for progress. I told him the situation, and I messed up and said, "luckily we did a backup before doing the upgrade." STOP!!! he said to me, and I keep this in my memory, and it has helped me NUMEROUS times:

There is no LUCK. YOU are a professional; I pay you to be a professional. Luck doesn't mean praying that it works. Doing backups and making sure you have a backOUT plan is what I pay you for. What the customer PAYS you for. LUCK is for rookies. We are IT professionals. Backup is the simple thing in the IT world that will come back to bite you if you are arrogant enough to believe in luck with regard to servers...

That stuck with me to this day. And many a day, Acronis/Backup Exec, CommVault, etc. has proven my point.


RE: I find it hard to believe...
By Lord 666 on 10/12/2009 4:05:45 PM , Rating: 2
Well said and a sign of a true professional. Moronman... take note.

Did they upgrade their SAN unit or perform LUN maintenance?


RE: I find it hard to believe...
By Motoman on 10/12/2009 7:21:00 PM , Rating: 2
Wow - "Moronman" - did you come up with that all on your own?

Also, please do explain what part of this gentleman's perfectly fine bit of research (into what actually happened) and his life lesson about needing backups contradicts anything I was saying...such that I should take note?

I have my paper and pen ready, oh great dark lord...


RE: I find it hard to believe...
By Lord 666 on 10/13/2009 1:22:29 PM , Rating: 2
Sure, if you are seriously looking for mentoring:

1. Humility - Learn to admit when you are incorrect and take in what others are saying as building blocks. Even after I and others pointed out you were incorrect, you still argued otherwise. Rippleyaliens understood when his boss corrected him. I would even assume this manager played a larger part in mentoring him as a whole.

2. Attitude - Drop the arrogance ASAP. No one wants to deal with someone who cannot take direction or constructive criticism... especially management. There is always someone smarter, more talented, and more experienced than you.

3. Professional development and training - VMware and Microsoft provide many conferences and training resources for free. Let's look past the irony that Danger is a Microsoft-owned company; overall their internal IT practices are well respected. One other benefit of these shows is the multiple vendors.


RE: I find it hard to believe...
By Master Kenobi (blog) on 10/12/2009 11:42:30 AM , Rating: 2
RAID protects against hardware failures only. What likely happened was the data was corrupted and/or in a format that cannot be pushed back to the Sidekick phones. RAID would not protect against that.


RE: I find it hard to believe...
By Motoman on 10/12/2009 11:58:25 AM , Rating: 1
...granted - so if we're presuming they didn't have a massive simultaneous number of hard drive suicides - what the hell were they doing to the data such that they lost it all at once?


RE: I find it hard to believe...
By Master Kenobi (blog) on 10/12/2009 12:00:37 PM , Rating: 2
Probably a system upgrade of some kind, that would be high on the likely candidate list.


RE: I find it hard to believe...
By jimhsu on 10/12/2009 12:27:45 PM , Rating: 2
rm . bashrc -rf
(note the space)

On a root account.

That might hurt...

It's failures (human failures) like these that backups are for.


RE: I find it hard to believe...
By MrBlastman on 10/12/2009 1:07:34 PM , Rating: 2
...and we are all human. Back in the day, years ago when I was running the tech end of a .com, one morning I came in deprived of sleep (common when you work a tech job) and went to work on another normal day. I was pruning some sections of our server and searching for a few things when I accidentally... whoops, deleted a chunk of data off of it.

Now, I had a RAID array set up, etc. -- but none of that would do me any good in this situation since the mirrors had already overwritten the data on their drives, so I was forced to go to incremental backups. We had a tiered system: incremental daily backups (stored on disk), nightly backups (via tape) and a weekly backup tape file where we kept the weekly tapes for a month or two. I'd say it worked pretty well.

Well enough that I was able to use the incremental backup to restore the data quietly, in such a manner that nothing appeared to have ever happened. The whole incident was--swept under the rug. Now, had my human error caused a more catastrophic loss, I would have simply un-tarred the tape back onto the drive and fixed everything that way.

RAID, as others have mentioned, provides uptime redundancy (along with heartbeat-mirrored servers). It doesn't save your data. Backups do that, and you need them in a corporate environment.

Danger... had DANGER written all over them since they opened their doors. I feel bad for everyone who has lost their data, and I cannot understand why this company, which was in business to serve, did not have backups of their clients' data.


RE: I find it hard to believe...
By Ranari on 10/12/2009 12:09:03 PM , Rating: 2
Honestly, I don't get the cloud model. What advantage does spending tens, maybe hundreds of millions of dollars on server hardware, maintenance, cooling, and data redundancy/backup have over a $0.50 memory card that companies buy in bulk?

I mean, am I wrong in my way of thinking here? What's the big deal about cloud?


RE: I find it hard to believe...
By totallycool on 10/12/2009 12:46:40 PM , Rating: 2
The Big Deal is that, It is no longer branded as 'IBM Mainframe with OS/390'


By Master Kenobi (blog) on 10/12/2009 3:48:42 PM , Rating: 2
This would be incredibly funny if it wasn't true.


RE: I find it hard to believe...
By walk2k on 10/12/2009 12:35:01 PM , Rating: 5
They are stupid for storing data in the clouds anyway!

What happens if it rains??


RE: I find it hard to believe...
By donxvi on 10/13/2009 9:08:44 AM , Rating: 2
There's a name for that, it's called a BitTorrent.


RE: I find it hard to believe...
By The0ne on 10/12/2009 12:35:19 PM , Rating: 2
They didn't have any backups, plain and simple. Our company has a RAID server; our engineering group makes a backup of it every night and takes it home, we have a copy of it online, and we have a backup on physical disk stored elsewhere. This is why data is so critical!!

I've done the same for the Department of Energy as well.


RE: I find it hard to believe...
By MERKJONES on 10/12/2009 3:58:33 PM , Rating: 3
I think the argument about what they used or didn't use for backup is irrelevant; the argument about why they didn't back anything up is more relevant. Reading this nerd fight on here made me realize I'm not that huge of a nerd. But I'm big enough of one to wonder why the hell M$/Danger didn't back up anything when they had ALL the data in their cloud... I mean really? It's 2009; kids do backups for their home PCs nowadays. Why couldn't they do it for data that affected EVERY Sidekick user?

This is why I use Android ;)


Microsoft's catastrophic mobile debacle
By Tony Swash on 10/12/09, Rating: -1
"I mean, if you wanna break down someone's door, why don't you start with AT&T, for God sakes? They make your amazing phone unusable as a phone!" -- Jon Stewart on Apple and the iPhone
