


Yes, they can even play Crysis

The idea behind so-called "thin-client" virtualization is simple and not terribly new: take a very powerful server, create a number of hosted environments on it, and stream video to cheap, low-powered desktop clients. The hosted environments receive all of the input (say, keyboard and mouse) from the clients.
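In rough terms, one server-side "tick" of such a system boils down to: apply the client's input, render the hosted desktop, compress the frame, and ship it back. The Python sketch below is purely illustrative, with hypothetical names; it is not any vendor's actual protocol.

```python
# Purely illustrative sketch of one server-side "tick" in a thin-client setup.
import zlib

class HostedDesktop:
    """Stand-in for one virtualized workspace running on the server."""
    def __init__(self, width=1280, height=720):
        self.width, self.height = width, height

    def inject_input(self, event):
        # Real systems forward the client's keyboard/mouse events into the session.
        print("input event:", event)

    def render_frame(self):
        # Real systems grab the session's framebuffer; here it's just a blank frame.
        return bytes(self.width * self.height * 3)

def serve_one_tick(desktop, pending_input):
    """Apply queued client input, then return one compressed frame to stream out."""
    for event in pending_input:
        desktop.inject_input(event)
    return zlib.compress(desktop.render_frame())  # stand-in for a real video codec

if __name__ == "__main__":
    desktop = HostedDesktop()
    packet = serve_one_tick(desktop, [{"type": "mouse_move", "x": 10, "y": 20}])
    print("compressed frame:", len(packet), "bytes")
```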

I. Power Efficient CPUs Drive HP's Growing Thin-Client Army

While the idea itself is not all that new, low-power processing and broadband streaming technology have only just now reached the point where server makers can offer products that make it attractive and affordable on a wide scale.

One of the key companies leading the way is Hewlett-Packard Co. (HPQ), which is looking to leverage emerging processor designs to offer businesses cheap and capable thin clients.

It's currently demoing two designs -- one based on a 1 GHz Texas Instruments Inc. (TXN) chip using an ARM Holdings Plc (LON:ARM) Cortex-A8 intellectual property core and instruction set, and the other using a 1.65 GHz dual-core T56N "accelerated processing unit" (APU) from Advanced Micro Devices, Inc. (AMD), which comes with a built-in Radeon graphics processing unit.

These cheap, low-power processors from TI/ARM and AMD are the key to HP's ambitious thin-client vision.
 
A pair of HP t610 thin-clients (powered by AMD APUs)

(The ARM product is dubbed the HP t410 AiO Smart Zero Client, while the AMD APU variant is named the HP t610 Flexible Thin Client.)

Michael Clifford, HP Director, Cloud Computing, UK&I, spoke about showing the new designs at the upcoming IP EXPO, which is being held this Wednesday and Thursday in London.  He comments, "The Cloud Computing space is constantly accelerating and maturing, and it is important that IT departments are not left behind.  At this year's IP EXPO, we will demonstrate how our secure cloud solutions establish the ideal foundation, allowing businesses to become the 'service broker' and decide how and where to deliver IT services, opening limitless horizons of new abilities and powers."

So what does all that mean?

Your employer may soon be replacing your full-fledged PC (expensive) with a small, lightweight, power-efficient thin client (cheaper).  But you won't see any performance hit (in theory), thanks to the beefy server back-end.

II. Enabled by Windows 8

Speaking of that back-end, it brings us to a final player in the thin-client push from HP and its rivals -- Microsoft Corp. (MSFT).  Microsoft's latest Windows 7 and Windows Server operating systems offer a technology called RemoteFX, which allows the server back-end to deliver a so-called "remote desktop" (the virtualized workspace) to the thin client.  Microsoft shows off the might of RemoteFX in a recent video, demonstrating that yes, a thin client really can play Crysis.


While Windows 8 is often demonized for its controversial Metro reskin of the Start menu and its focus on features businesses care less about (say, touch), the upcoming OS is good news for businesses when it comes to thin clients.  Why?

Well, the latest and greatest upcoming thin clients, such as the HP t410 AiO, use cheap, power-efficient ARM processors -- the same chips used in smartphones and tablets -- and thus can't run Windows 7, Windows Vista, or Windows XP.  Windows RT -- the ARM-centric version of Windows 8 -- will be the first Microsoft operating system to support them.  In that regard, the ability to leverage ARM-endowed thin clients could prove a key carrot driving businesses to Windows 8 -- something that would doubtless please Microsoft.

Sources: HP [1, press release via IT News Online], [2]



Comments



I'm sure this would be great for business...
By MZperX on 10/16/2012 2:02:15 PM , Rating: 5
... but I also wanted to do this for a long time on my home network. So, instead of building individual machines and upgrading them and dealing with updates/patches/service packs/backups on an individual basis, I just wanted to put a thin client on my kids' desks and run terminals using my workstation. In a wired home with Gigabit Ethernet this is a definite possibility.

If they make this a reality in Windows 8 I'm going to seriously consider it. I would just build a monster workstation (my current one isn't bad but this would give me an excuse to upgrade) that could run the other virtual terminals effortlessly. That way I'd only have to deal with maintaining one machine. It seems it would also be more cost effective in the long run.

I just need to research the LAN gaming aspect of this. I'd hate to find out that multiplayer games (e.g. Arma II or TF2) did not work with this setup. That would defeat the purpose...




RE: I'm sure this would be great for business...
By Shig on 10/16/2012 3:45:11 PM , Rating: 2
Wouldn't the terminals each need their own dedicated GPU in the master box to run a game at high quality settings in HD quality?


RE: I'm sure this would be great for business...
By MZperX on 10/16/2012 5:23:22 PM , Rating: 2
Good question, I've been wondering myself. Quite possibly the answer is yes, in which case this is still a bridge too far for gaming applications.

The dumb/thin terminals could be relatively low resolution (1680x1050 or 1440x900) and I'd still be okay with that, but ideally the "mainframe" machine would have to do all the graphics crunching for them. So, there would have to be a way to allocate a certain number of rendering pipelines to each virtual machine if they are to run on a single graphics card. I can see this happening on something like a Radeon 7970 with 6GB VRAM if the graphics driver supported it. After all, GPUs are supposedly awesome at multi-threaded tasks. There would of course be a limit as to how many virtual terminals could be served with graphics at an acceptable FPS, but even if it's only 3 or 4 that would be workable. I guess a future hexa- or octa-core CPU could handle the processing end of this without too much trouble.
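As a back-of-envelope illustration (assumed per-session memory budgets only, not how any real driver behaves), a 6GB card would stretch roughly this far if VRAM were the only limit:

```python
def sessions_per_card(vram_gb, per_session_mb):
    # If video memory were the only constraint, how many sessions fit on one card?
    return int(vram_gb * 1024 // per_session_mb)

for budget_mb in (256, 512, 1024):  # assumed per-terminal allocations
    print(budget_mb, "MB per session ->", sessions_per_card(6, budget_mb), "sessions on a 6GB card")
```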


RE: I'm sure this would be great for business...
By Jeffk464 on 10/16/2012 5:37:30 PM , Rating: 1
Terminals are better served running Chrome OS than Windows.


RE: I'm sure this would be great for business...
By Alexvrb on 10/16/2012 8:23:04 PM , Rating: 2
Oh they've got RemoteFX for ChromeOS now?

Heck, even all that aside, I'd go for a popular Linux distro before ChromeOS. :/


RE: I'm sure this would be great for business...
By Samus on 10/17/2012 1:53:39 AM , Rating: 2
I'm not so sure; Windows 7 Thin PC is a 2GB install and runs well on 512MB of RAM and a Pentium III.

The compatibility pros outweigh any cons (unless the $35 COA is too expensive for you cheap-ass Linux fans).


By Alexvrb on 10/19/2012 2:01:00 AM , Rating: 2
Oh I agree. I was just saying that if you're going to go with "Hey it's free!", why not just use a superior non-Google-dominated Linux distro?

I didn't mean to imply that he is correct in assuming that Windows is unable to run well, or is otherwise inferior, on low-end hardware. I mean, shoot, WinRT runs like butter on a lowly T30, and as you pointed out, they've got slim versions for x86 clients that work great on lowly hardware.


RE: I'm sure this would be great for business...
By GladeCreek on 10/17/2012 8:59:58 AM , Rating: 2
As it stands today, GPUs cannot be virtualized - a necessary step to do what you're after. Right now, it's a 1-to-1 mapping of a GPU to a session. Hopefully it's just a driver thing, but I'm thinking they aren't there yet with the hardware. Heck, we're only just now getting virtualization for NICs (SR-IOV), and that requires the right hardware from the card all the way through the motherboard to the CPU, and the right drivers, and the right OS. Server 2012 is the first MS OS to support it. Methinks it'll still be some more time on the GPU side, but in a household, you may be able to load up enough GPUs in a central workstation to make it work.


By GTJayG on 10/27/2012 12:23:44 PM , Rating: 2
You're wrong on the virtualized GPU comment. I'm helping support a 2012 W8 project that's virtualizing GPUs expressly to provide the needed firepower for CAD program usage on thin clients in a university setting.


Good idea, but bad for games
By Khenglish on 10/16/2012 3:11:05 PM , Rating: 2
I'm pretty sure this idea will never catch on for games, but it has other uses.

This idea adds the latency of communicating over the internet to every user action. Even when playing an offline game, everything you do will be noticeably delayed. Some people can feel and have issues with monitor latencies over 30ms, and that delay is only display lag, not user action lag. Throw at least 50ms on top of that, not only for display lag but for input lag from the keyboard and mouse, and you have a problem. Modern lag compensation is already problematic (anyone play BF3?), and I doubt it can help at all here. It could be fun for a few days if you have nostalgia for the old online gaming days over 56k with a P3 and no hardware mouse, but I think that will wear out fast. For something slow-paced like Civilization this will be acceptable, but fast-paced games like SC2 will be problematic, and fast-paced games that require precise clicks, such as FPS games, will be very frustrating.

I'm very curious about how they got Crysis to run in their demo. An uncompressed 32-bit 1920x1080 framebuffer is 8.3MB per frame. The nearly worthless 8 transparency bits could just be dropped, and I suppose that since most monitors are only 18-bit this could be cut down to 4.7MB. To get just 30 fps at 18 bits would require 140MB/s over the internet. The framebuffer could be compressed by the server and decompressed by the user's computer, but the PNG format would only cut the image size by around half, while other higher-compression formats would impact visual quality. 18-bit 1280x720 at 30fps compressed in PNG format would require 31MB/s. I don't think most people have internet that fast. Maybe HP hooked the thin PC up to the server directly over the LAN?
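For reference, here's that arithmetic worked out (same assumptions as above: raw frames, lossless compression roughly halving the size):

```python
def frame_mb(width, height, bits_per_pixel):
    # Uncompressed frame size in (decimal) megabytes.
    return width * height * bits_per_pixel / 8 / 1e6

print(f"1080p, 32-bit: {frame_mb(1920, 1080, 32):.1f} MB/frame")          # ~8.3 MB
print(f"1080p, 18-bit: {frame_mb(1920, 1080, 18):.1f} MB/frame, "
      f"{frame_mb(1920, 1080, 18) * 30:.0f} MB/s at 30 fps")              # ~4.7 MB, ~140 MB/s
print(f"720p, 18-bit, 30 fps, ~2:1 lossless compression: "
      f"{frame_mb(1280, 720, 18) * 30 / 2:.0f} MB/s")                     # ~31 MB/s
```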

I see good uses outside of games though. Pretty much anything that requires heavy calculation without requiring fast user input should be OK. Doing college research and need to calculate some protein folding scenarios but you only have a smartphone on you? Send it to a server.

Another interesting idea is if they don't use a server and just do cloud computing on other thin PCs. Computers are usually just idle so the network of thin PCs should be able to support the small fraction at a time that are actually doing heavy computing tasks. It'll only work for highly parallel tasks, but ~100 APUs should get you pretty good performance.
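The "only highly parallel tasks" caveat is basically Amdahl's law; a quick illustration with made-up parallel fractions:

```python
def speedup(p, n):
    # Amdahl's law: p = parallel fraction of the task, n = processors pooled.
    return 1 / ((1 - p) + p / n)

for p in (0.5, 0.9, 0.99):
    print(f"parallel fraction {p:.0%}: {speedup(p, 100):.1f}x across 100 APUs")
```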




RE: Good idea, but bad for games
By Khenglish on 10/16/2012 3:18:51 PM , Rating: 2
Just watched the video and it looks like Crysis is dropping to below 10fps at times. When the fps drops you can see that motion is still smooth in the background, so it's not the video recording quality. Looks like the frame rate is worse than what the thin PC would be getting if it just used its own APU.


RE: Good idea, but bad for games
By Alexvrb on 10/16/2012 8:21:39 PM , Rating: 2
That's funny, I didn't know Crysis ran on extremely low-power ARM chips! Oh wait, you didn't listen to what the HP guy said, at all. Or else you'd know the REASON it ran so poorly was that the current (enterprise-oriented) iteration of RemoteFX only allocates up to 200MB of video memory per thin client.

If a home version of RemoteFX comes out, it will have a lot more effort put into gaming. They were just demonstrating that it could indeed stream a DirectX title to a thin client, which on its own is completely unable to run such software.


RE: Good idea, but bad for games
By Rukkian on 10/16/2012 3:44:50 PM , Rating: 2
While I don't think these will be used for gaming, you are comparing different things. These will not get their guest OS from a server on the internet; it will come from a local server (usually on the LAN). Companies have had stuff like this for a while, but bandwidth was always the issue; even on a LAN, 100+ MB/s non-stop is not really scalable without massive network infrastructure.
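A quick illustration using the rough ~31 MB/s per-client figure from the comment above (assumed numbers, ignoring protocol overhead):

```python
link_mb_per_s = 1000 / 8        # gigabit Ethernet, roughly 125 MB/s in decimal units
per_client_mb_per_s = 31        # compressed 720p figure from the earlier comment
print("streams per gigabit link:", int(link_mb_per_s // per_client_mb_per_s))
```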

The only difference here is that you can now use cheaper terminals, since they are ARM-based instead of x86.


Why is this cool?
By deeceefar2 on 10/17/2012 2:38:48 AM , Rating: 1
It's been pretty obvious to me for a while that the future of computing is thin clients with software as a service powering the apps. This is the stop-gap solution to get every app running in the cloud, even if it isn't designed for it.

A perfect use for this iteration right now is a 3D rendering company, or for that matter any company that currently has to have workstations with legitimate graphics cards in them. Instead of having to upgrade individual workstations, you can deploy a single thin client to each end user, and those don't ever have to be updated. Then, as needed, you upgrade the servers in the enterprise cloud to handle the workload and increase performance. You could have one server designed for high-end 3D applications serving just that app to the users who need it, and other servers rendering the less demanding ones. In the future this could allow any user running a Windows tablet or smartphone to use any application, no matter how demanding the graphics are.

The biggest issue I see is that this solution is incomplete and under-supported by the company, likely to prevent cannibalizing their own business. Meanwhile everyone else is having to develop around the lack of this capability. Way to develop the software to save the company and then not get anyone to use it.




"There is a single light of science, and to brighten it anywhere is to brighten it everywhere." -- Isaac Asimov













