
NC State-developed software can be used with existing network protocols and hardware

When it comes to WiFi networks, the key to boosting speed may not lie solely in adopting new, faster hardware and software protocols, but also in developing better software to balance loads when networks get overrun with traffic.

Researchers at North Carolina State (NC State) have developed a program they call WiFox, which dynamically adjusts channel priority for different WiFi access points depending on usage.

With 25 users, the system showed a 400 percent gain in throughput; with 45 users, it sped the network up 700 percent versus traditional networking software.  Best of all, the researchers say their program plays nicely with existing protocols and network hardware, with no upgrade needed.

The only potential downside is that if, by some unfortunate occurrence, all the access points in a region were overloaded, the gains might diminish.  But for the more common scenario where some areas are swamped and others underutilized, the dynamic prioritizing concept could offer a big step forward.

The researchers are presenting their work at the ACM CoNEXT 2012 conference in Nice, France.  The paper's authors are Arpit Gupta (lead author), a Ph.D. student in computer science at NC State, Jeongki Min, a Ph.D. student at NC State, and Dr. Injong Rhee (senior author), a professor of computer science at NC State.

The research was funded by the National Science Foundation.

Source: NCSU [press release]




How Is This Different?
By Stiggalicious on 11/16/2012 12:25:26 PM , Rating: 4
I've been working with Cisco wireless APs for years, and they have a dynamic load balancing feature that swaps channels constantly among multiple APs based on user demands and locations. When I turned that feature on for our university's wireless network, speeds jumped up significantly. How is this different from what NC State did?




RE: How Is This Different?
By Etsp on 11/16/2012 12:38:57 PM , Rating: 3
I read the source link, and I read the daily tech article, and I find that I'm coming to a completely different understanding than Jason did when he wrote the article.

What the source link describes is when you have one access point serving a large number of users, there are significant performance issues, and NC State's proposed solution to some of those issues.

It's not some sort of coordination between access points; rather, it's a means of dynamically giving the access point priority over the users on a given WiFi channel, so it can transmit its backlog of data.

Interestingly, the abstract in the source link specifies that the 400% increase was in "downlink goodput", not overall throughput, but that caveat wasn't listed anywhere else...


RE: How Is This Different?
By name99 on 11/16/2012 2:40:44 PM , Rating: 3
Goodput IS what you want to measure. Throughput essentially counts every bit transmitted, including bits that are used to run the protocol: headers, packets that are dropped at the router for lack of buffer space, etc.
Goodput refers to the throughput of REAL data --- how many bytes per second of MY data I see leaving my PC.

ALL network measurements anywhere that are of interest to the public should be of goodput. The only time throughput should ever be mentioned is in technical papers dealing with modulation and protocols, where the target audience knows the difference and understands the relationship between the two.
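The distinction name99 draws can be made concrete with some toy arithmetic (the frame sizes and loss counts below are illustrative assumptions, not figures from the paper): throughput counts every byte on the air, while goodput counts only payload bytes that successfully arrive.

```python
# Toy numbers, for illustration only: goodput counts successfully
# delivered payload bytes; throughput counts everything transmitted.

payload = 1460          # application bytes per frame (assumed)
overhead = 78           # header/ACK bytes per frame (assumed)
frames_sent = 1000
frames_lost = 50        # lost frames deliver no payload

throughput_bytes = frames_sent * (payload + overhead)
goodput_bytes = (frames_sent - frames_lost) * payload

print(throughput_bytes)  # 1538000
print(goodput_bytes)     # 1387000
```

Even with these modest assumptions, goodput comes out roughly 10 percent lower than throughput, which is why the two figures shouldn't be quoted interchangeably.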


RE: How Is This Different?
By Etsp on 11/16/2012 3:38:27 PM , Rating: 2
I wasn't sure what goodput is, so I included the term as-is. However, the "downlink" portion of that was the caveat I was referring to. Thank you for the explanation of goodput, though.


Heavy users beware.
By drycrust3 on 11/16/2012 10:15:12 AM , Rating: 1
quote:
The only potential downside is that if by some unfortunate occurrence all the access points in a region were overloaded, the gains might be diminished,

As I see it, load balancing software would effectively limit the bandwidth of all users, but especially heavy users. They would find the system slow and unresponsive, while light users would find it works well.




RE: Heavy users beware.
By x10Unit1 on 11/16/2012 10:31:26 AM , Rating: 2
So heavy users have priority over regular users?

Besides, if you finish regular user requests faster there will be more bandwidth for the heavy users to fight over.


RE: Heavy users beware.
By Etsp on 11/16/2012 11:57:03 AM , Rating: 3
As I see it, this technology has nothing at all to do with individual users or their habits.

Since the access point and users are communicating on the same channel, they have to take turns transmitting.

If the access point starts developing a backlog of information to transmit, its priority to send that traffic goes up. The bigger the backlog, the higher the priority.

The effective limit that you are referring to is the physical limitations of how much data can be transmitted in that channel over a period of time. This is a hard limit, and while newer technologies can increase that limit, it usually requires new hardware on both ends.

This software simply provides a means of using that limited space more effectively, improving everyone's throughput.
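A minimal sketch of the mechanism Etsp describes, as a hedged illustration: the AP's channel-access priority rises with its queued downlink backlog. The function name, thresholds, and priority levels below are assumptions for illustration, not values from the WiFox paper.

```python
# Illustrative sketch (not the actual WiFox code): map the access
# point's downlink backlog to a channel-access priority level, so a
# growing queue earns the AP more chances to transmit and drain it.

def ap_priority(backlog_packets, base_priority=1, step=50, max_priority=4):
    """Return a priority level that grows with the AP's backlog.

    `step` and `max_priority` are assumed, illustrative parameters.
    """
    bump = backlog_packets // step   # one extra level per `step` queued packets
    return min(base_priority + bump, max_priority)

print(ap_priority(0))      # idle AP stays at base priority: 1
print(ap_priority(120))    # moderate backlog raises priority: 3
print(ap_priority(10000))  # priority is capped, not unbounded: 4
```

The cap matters: without it, a badly overloaded AP would monopolize the channel entirely, which matches the article's caveat that gains diminish when everything is overloaded at once.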


RE: Heavy users beware.
By name99 on 11/16/2012 2:34:56 PM , Rating: 2
I don't know what they've done, but this may well not be the case.

The problem with WiFi is that the MAC is crap. Because it insists on a distributed MAC rather than a central controller (like the cell phone protocols), a MASSIVE amount of time is spent with either everyone waiting and no-one transmitting, or multiple people transmitting at once and needing to retransmit.

The RIGHT way to fix this would be to adopt a similar MAC (including something like dedicating one OFDM channel to a RACH, so that very short packets and requests to transmit can be fielded without slowing down the bulk system).
In the absence of that, you could imagine software running on every device accessing the network which is essentially doing the same thing --- scheduling who goes when, and handing out those slots. Such a system would require tightly synced clocks, and you would have to reject from the network users who aren't using the custom scheduling software. So it would be feasible in closed environments (eg corporations or universities) but not public environments.

The REAL solution is for the damn IEEE to copy 3GPP (or heck, copy WiMax, I don't care) and use a MAC that doesn't date from the 70s.


Pic
By ClownPuncher on 11/16/2012 12:09:03 PM , Rating: 2
What the hell is that picture? I think you could do better.




RE: Pic
By OCNewbie on 11/16/2012 8:29:13 PM , Rating: 2
/jokerface

Not sure if serious

http://www.youtube.com/watch?v=FL7yD-0pqZg


RE: Pic
By ClownPuncher on 11/19/2012 12:54:29 PM , Rating: 2
I'd never seen it. I don't watch the TV!


"We are going to continue to work with them to make sure they understand the reality of the Internet.  A lot of these people don't have Ph.Ds, and they don't have a degree in computer science." -- RIM co-CEO Michael Lazaridis
