Concern that teachers may transfer math anxiety to some students

A new report published in the
Proceedings of the National Academy of Sciences claims that female
teachers who are anxious about math can pass that attitude on to
their female students.

The study was based on 17 first- and
second-grade elementary school teachers, along with 52 boys and 65
girls. At the beginning of the school year, the researchers found
that boys' and girls' math achievement was unrelated to their
teacher's attitude toward the subject.

"We are not sure
whether it's something overt, whether it's non-verbal behavior or
perhaps (teachers are) not spending much time on the subject,"
said Susan Levine, University of Chicago psychology and human
development professor, co-author of the “Female Teachers' Math
Anxiety Affects Girls' Math Achievement” study. "It's
not just a teacher's knowledge of the subject, but there's something
about their feeling about the discipline."

This is a significant problem: scientists, engineers, and
mathematicians are in high demand, and these jobs help stimulate the
economy. However, men continue to dominate engineering and IT jobs,
and researchers believe it is detrimental to research to "dismiss
50%" of potential researchers because they are women.

The National Survey of Science and Mathematics Education indicates
that more than 90% of all elementary school teachers in the U.S. are
women, and that only a minimal amount of mathematics study is
required to receive a teaching certificate -- something that may be
addressed in the future.

By TheEinstein on 1/28/2010 11:14:29 AM, Rating: 1

Here, enjoy my invention in binary math; see if you can follow it.

Step 1: Make 3 temp files, then read the input two bits at a time:
- if x = 00, write 0 to each file
- if x = 01, write 0 to files one and two, and 1 to file three
- if x = 10, write 0 to file one, 1 to file two, nothing to file three
- if x = 11, write 1 to file one, nothing to files two and three
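Read literally, Step 1 seems to describe the following pair-splitting routine. This is a minimal Python sketch of that reading (my interpretation, not the poster's code), assuming an even-length input bit string:

```python
def split(bits):
    """Route each 2-bit pair of the input into three temp streams,
    following the Step 1 rules. bits: a string of '0'/'1' characters."""
    f1, f2, f3 = [], [], []
    for i in range(0, len(bits) - 1, 2):
        x = bits[i:i + 2]
        if x == "00":
            f1.append("0"); f2.append("0"); f3.append("0")
        elif x == "01":
            f1.append("0"); f2.append("0"); f3.append("1")
        elif x == "10":
            f1.append("0"); f2.append("1")  # nothing to file three
        else:  # x == "11"
            f1.append("1")                  # nothing to files two/three
    return "".join(f1), "".join(f2), "".join(f3)

print(split("00011011"))  # ('0001', '001', '01')
```

Note the mapping is invertible: read file one; a 1 means the pair was 11, otherwise read file two; a 1 there means 10, otherwise the next file-three bit distinguishes 00 from 01.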

Then use a binary compression algorithm on any of the temp files that will compress... or repeat the system if gains can be had.

This will bring maximum entropy to bear on ANY given file, with the least possible code to do so, and will not increase the overhead of any file, thanks to a simple size check. Each temp file has a chance of statistical compression. This is the closest humanity will get to a 'random data compressor'.

Now, when you can do theoretical binary math and follow large statistics thoroughly, you can naysay what someone says. Until then, how about you just say, 'Well, I won't disbelieve unless the person gives me a basis to do so.'

The first file has a probability of 3/4 for 0 and 1/4 for 1. The average number of bits required to encode one bit from that file is (using Log for log base 2 and omitting parentheses around the fractions): -3/4 Log 3/4 - 1/4 Log 1/4 = 3/4 Log 4/3 + 1/4 Log 4 ≈ 0.811 bits.

For every 8 bits of input we have, on average, 4 bits in the first file, 3 bits in the second file, and 2 bits in the third file (with probabilities of 0 of 3/4, 2/3, and 1/2, costing about 0.811, 0.918, and 1 bit per bit respectively). After compression, the number of bits your invention will end up taking for those 8 bits of input is: 4 × 0.811 + 3 × 0.918 + 2 × 1 = 8 bits -- exactly what you started with.
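The entropy accounting above can be checked directly; a short Python sketch (assuming uniformly random input bits, as in the argument):

```python
from math import log2

def H(p):
    """Binary entropy in bits of a biased bit with P(1) = p."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

# Per 8 uniformly random input bits: 4 bits in file one with P(1) = 1/4,
# 3 bits in file two with P(1) = 1/3, 2 bits in file three with P(1) = 1/2.
total = 4 * H(1 / 4) + 3 * H(1 / 3) + 2 * H(1 / 2)
print(round(total, 9))  # 8.0 -- mathematically exactly 8: no net gain
```

The cancellation is exact because the split merely relabels the input; the total entropy of the three streams equals the entropy of the original 8 bits.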

quote: will not increase the over-head of any file by using a simple check vs size

It always costs you at least one bit to store the fact that you didn't compress at all...otherwise your decompressor will apply the decompression algorithm to the uncompressed data and return garbage. If you actually work out the details for your invention you will either stumble upon that bit or end up inadvertently hiding it somewhere in the file system metadata (such as the existence of multiple files).

Google for "Pigeonhole principle" for further details on why a lossless compressor must always increase the size of some inputs.
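The counting behind the pigeonhole argument can be sketched in a few lines (my illustration, not from the thread):

```python
# There are 2**n distinct n-bit inputs, but only 2**0 + 2**1 + ... +
# 2**(n-1) = 2**n - 1 distinct outputs of length strictly less than n.
# So no lossless scheme can map every n-bit input to a shorter output
# without mapping two inputs to the same output.
n = 16
inputs = 2 ** n
shorter_outputs = sum(2 ** k for k in range(n))
print(inputs, shorter_outputs)  # 65536 65535
```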

quote: Each temp file has a chance of statistical compression

...which is just enough to cancel out the bloat you added. One difference: the bloat is absolute while the compression is only theoretical, so using a real compressor will leave you with a net loss.
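The net-loss claim can be tried empirically with Python's standard zlib. This is a rough, hypothetical experiment of my own (helper names and sample text are mine, not from the thread): compress some skewed text directly, then split it with the three-file scheme and compress each stream separately.

```python
import zlib

def bits(data):
    """Byte string -> '0'/'1' bit string."""
    return "".join(f"{b:08b}" for b in data)

def split(s):
    """Route each 2-bit pair into three streams per the scheme."""
    f1, f2, f3 = [], [], []
    for i in range(0, len(s) - 1, 2):
        x = s[i:i + 2]
        f1.append("1" if x == "11" else "0")
        if x != "11":
            f2.append("1" if x == "10" else "0")
            if x != "10":
                f3.append(x[1])
    return "".join(f1), "".join(f2), "".join(f3)

def pack(s):
    """Bit string -> bytes, zero-padded to a whole byte."""
    s += "0" * (-len(s) % 8)
    return bytes(int(s[i:i + 8], 2) for i in range(0, len(s), 8))

data = b"the quick brown fox jumps over the lazy dog " * 100
direct = len(zlib.compress(data))
parts = sum(len(zlib.compress(pack(f))) for f in split(bits(data)))
print(direct, parts)  # compare the total compressed sizes
```

Splitting destroys the byte alignment a dictionary compressor like zlib exploits, and each stream pays its own header overhead, so splitting first is not expected to win even on skewed data.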

This is why your teacher wanted you to show all your work.

A) Pigeonhole of course always trumps; you're stupid to think otherwise.

B) However, many files can also be compressed below their original size. You're stupid to think otherwise.

C) Your math is bad. You assume only random data. I do not. If the file is uncompressible, it is UNCOMPRESSIBLE.

The system is designed to be used on skewed data sets. Re-run your math now to account for text, programming languages, and the like.

You tried, and I will give you credit, but you completely failed to apply basic concepts in your attempt to act high and mighty.

As for showing how you got your work: in "A Beautiful Mind," Nash showed how he did some math through his life, but other math he would just do, never explaining how he got the answer.

There are math geniuses who never need to 'do the steps' to get the answer. They can adequately walk through complex math principles while merely stating the answer. 10 internet points if you can name one of the conditions that allows this.

Furthermore, yes, it's great that Fermat jotted random theorems in his notes and then noted, "I can prove this... but I need to feed my cat." Well, pat on the back for him, good job. Except that none of his work serves as a firm basis for other scientific work, because of the lack of rigorous proof.

The main point, however, was never that you didn't know math. The point of your middle-school math class, or any other topical class, wasn't to teach you the pitiful content matter at the time, but rather to teach you to follow direction and instruction.

But /thread.

If you want to go through life ignoring what people in charge tell you to do, I wish you the best of luck. You should go point a gun at a cop and then argue with him when he tells you to put it down.