



Mockup of the Killer NIC
The Killer NIC adapter may be the latest trend in gaming hardware

Bigfoot Networks has announced its Killer Network Interface Card. The new Gigabit Killer NIC is aimed at hardcore gamers who demand every last drop of performance from a gaming system. With a 400MHz network processor and 64MB of dedicated PC-2100 DDR memory, the Killer NIC has enough horsepower to sustain Gigabit transfer rates without monopolizing CPU cycles.

MaxFPS technology frees up CPU cycles typically consumed by heavy network traffic by offloading the required processing onto the Killer NIC's 400MHz network processor. UltimatePing technology lowers ping by optimizing how quickly data is delivered to games, while PingThrottle technology allows users to raise or lower ping as needed. GameFirst technology prioritizes game packets over background downloading utilities such as BitTorrent.
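The GameFirst idea described above amounts to class-based packet prioritization. Here is a minimal sketch in Python of how such a scheduler could work; the class names and packet labels are illustrative assumptions, not Bigfoot's actual implementation:

```python
import heapq

# Hypothetical GameFirst-style scheduler: game packets are dequeued
# before bulk traffic such as BitTorrent chunks.
GAME, BULK = 0, 1  # lower number = higher priority

class PacketScheduler:
    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker keeps FIFO order within a class

    def enqueue(self, packet, priority):
        heapq.heappush(self._queue, (priority, self._seq, packet))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._queue)[2]

sched = PacketScheduler()
sched.enqueue("torrent-chunk-1", BULK)
sched.enqueue("game-update-1", GAME)
sched.enqueue("torrent-chunk-2", BULK)
sched.enqueue("game-update-2", GAME)

order = [sched.dequeue() for _ in range(4)]
print(order)  # game packets drain first, then bulk, each in arrival order
```

The sequence counter matters: without it, packets of the same class would compare on their payloads rather than their arrival order.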

While NVIDIA has implemented features similar to MaxFPS and GameFirst in the form of the FirstPacket and TCP/IP offload functions of its nForce 500 series of chipsets, the Killer NIC is the first standalone network card to offer such features. The Killer NIC is also upgradeable through its Flexible Network Architecture, which allows anyone to code programs that take advantage of the network processor. Bigfoot Networks' Chief Architect claims "FNapps can be anything from simple gaming chat programs or servers, to full online gaming VoIP solutions." This could prove interesting if a game developer coded a game to take advantage of the Killer NIC's processing capabilities for VoIP functionality.

In a world where nearly every enthusiast motherboard has onboard Gigabit Ethernet, Bigfoot Networks may have a hard time convincing gamers a PCI Ethernet card is needed for the ultimate gaming experience, especially since PCI slots are becoming scarce on newer motherboards.

The Killer NIC will be available starting on August 16th with no mention of pricing.





RE: I have the solution to the mess
By goku on 7/13/2006 8:56:37 PM , Rating: 2
A second or even a fourth processor won't make up for the fact that they're slower than a specialized processor. By that logic, we wouldn't need video cards. Dual processors aren't faster than a single processor with an Ageia physics card (for physics calculations, anyway).


RE: I have the solution to the mess
By Scrogneugneu on 7/13/2006 11:35:02 PM , Rating: 1
If the physics calculations are handled by multiple threads, then yes, a multi-core CPU could handle them much, MUCH better than a current single-core CPU.

Just try to grasp what dual core is all about: you basically have the ability to compute twice as much information in the same amount of time. Now, nobody is going to make an application that takes full 100% advantage of this doubled calculation capacity, since every application has some work that must be single-threaded. But you can significantly reduce the waiting by delegating work to a second thread executed on the other core.

Say creating a frame takes (for example) 100 cycles: 20 of those are for getting and sending info over to the GPU, 40 are for AI / game logic, and 40 are for physics emulation. If one can organize the application to spend the 20 cycles sending info to the GPU, 5 cycles handing the physics data to a thread on the other core, and 40 on AI / game logic while the other core computes the physics, we end up with a total of 65 cycles instead of 100.
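The commenter's cycle budget can be reproduced as simple arithmetic (the numbers are the post's own illustrative figures, not measurements):

```python
# Frame budget from the comment above: the 5-cycle handoff replaces
# the 40-cycle physics pass on the main core, and physics then
# overlaps with AI on the second core.
gpu_submit, ai_logic, physics = 20, 40, 40
handoff = 5  # cost of shipping physics data to the second core

sequential = gpu_submit + ai_logic + physics                 # one core
overlapped = gpu_submit + handoff + max(ai_logic, physics)   # two cores

print(sequential, overlapped)  # 100 65
```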

Moreover, we can beef up the physics calculations, since we are no longer computing them on a core that also has to switch to background threads to keep Windows running at the same time. All in all, having two cores enables both cranking up the physics and reducing calculation time. To achieve this, one only has to master an efficient threading pattern.
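The delegation pattern described above can be sketched in a few lines; this is a toy stand-in with hypothetical `physics_step` and `ai_step` functions, not a real engine:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy sketch of the threading pattern: physics for the current frame
# runs on a worker thread while the main thread handles AI / game logic.
def physics_step(bodies):
    # toy "integration": advance each body by its velocity
    return [(x + vx, vx) for x, vx in bodies]

def ai_step(agents):
    # toy "AI": each agent moves one step toward the origin
    return [a - 1 if a > 0 else a + 1 if a < 0 else a for a in agents]

def frame(bodies, agents, pool):
    future = pool.submit(physics_step, bodies)  # hand physics to the worker
    agents = ai_step(agents)                    # AI runs in the meantime
    return future.result(), agents              # join before rendering

with ThreadPoolExecutor(max_workers=1) as pool:
    bodies, agents = [(0.0, 1.0)], [3, -2, 0]
    bodies, agents = frame(bodies, agents, pool)

print(bodies, agents)
```

Note that in CPython the GIL limits true parallelism for pure-Python CPU-bound work; the point here is the structure of the fork/join per frame, which maps directly onto a second core in a compiled engine.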

The same problem exists for Ageia, for example. The application still has to do the same tricks, but instead of sending the info over to the other core, it must send it over the PCIe lane to the expansion card. True, the processor on that board is made specifically for those calculations, and its only advantage is that it can handle a lot more work than the CPU in the same time. The drawback? When you're not running a game with a physics implementation (or when you do anything else on your computer that doesn't need physics acceleration), that processor sits idle, wasting a lot of cycles. The dual core will be useful in any situation.

What's more, you would need a specialized processor for everything: sound, physics, Gigabit networking. With a dual core, the second core can help with all of these tasks at the same time. While not as efficient at any one of them compared head-to-head with dedicated hardware, it offers a very acceptable performance level across a very broad range of calculations. And to top it all, you already have a dedicated processor for sound: your onboard sound processing engine. It gets the job done. You also already have a dedicated controller for the Gigabit port.

The main difference is that video cards genuinely exploit the fact that they can do far more work than a second core could; the amount of processing required to create what we see on screen is huge. I highly doubt that this is the case with sound, Gigabit networking, or physics.

Adding more cores to the CPU is the solution. Who cares if we use up 80% of one of our 4 cores to produce sound and good physics? We still have 3 other cores ready to do whatever else the computer needs to do. Offloading work from the CPU at the very moment CPUs are gaining the capacity to handle much, much more work in the same time is nowhere near logical.


RE: I have the solution to the mess
By omniscient101 on 7/14/2006 12:11:55 AM , Rating: 3
I believe you're grossly misunderstanding/underestimating the complexity and difficulty of proper physics calculations for computers. Current x86 cores from AMD and Intel are not optimized to handle physics algorithms at all. I agree, network and (most) audio traffic is in no way a bottleneck/CPU hog, but physics is going to be best left to a separate 'specialized' processor.


By Scrogneugneu on 7/14/2006 7:15:12 PM , Rating: 2
I see several games with good physics engines. I have no external specialized processing unit to calculate them, and they run on a single-core processor.

If I wanted to, I could get a dual core and be able to handle much more physics. Yes, I could handle more with a specialized coprocessor. No, it wouldn't be useful for anything else than calculating physics effects.

The point is, do we REALLY need to have tons and tons of physics in games? Just take Half-Life 2, for example. Say you put 4 times more physics effects in the game; it could run on a multi-core CPU without any problem. Why in the world would you want 400 times more physics? I don't care if the wood I shoot at breaks into 7 parts rather than 5 big ones, 12 medium, 45 small, and 200 miniature pieces. I would still see them visually; they just wouldn't have any effect in-game. Would they really have an effect if it was all calculated? No. Why calculate it then?


"DailyTech is the best kept secret on the Internet." -- Larry Barber

Related Articles
nForce 590, 570, 550 Announced
May 23, 2006, 4:41 AM













Copyright 2014 DailyTech LLC.