Friday, January 13, 2012

Aerospace wind tunnel

In this report I will discuss the wind tunnel. I will describe what wind tunnels are used for and the different types of wind tunnels, from low-speed subsonic to high-speed hypersonic tunnels. I will also give a few examples of the wind tunnels used today.
The wind tunnel is a device used by many people, from high school students to NASA engineers. It is used to test aircraft to see how well they will perform under certain conditions. The test article may be as big as a full-size 747 or as small as a match. To understand how a wind tunnel helps in the design process, you first have to know how a wind tunnel works.
How Wind Tunnels Work
A wind tunnel is a machine used to "fly" aircraft, missiles, engines, and rockets on the ground under preset conditions. With a wind tunnel you can choose the airspeed, pressure, altitude, and temperature, to name a few variables. A wind tunnel usually has a tube-like shape through which a large fan drives air over the object being tested (a plane, missile, rocket, etc.) or a model of it. The object is fixed in place in the test section of the tunnel, and instruments are placed on the model to record the aerodynamic forces acting on it.
Types of Wind Tunnels
There are four basic types of wind tunnels: subsonic, transonic, supersonic, and hypersonic. Wind tunnels are classified by the airspeed they can produce. A subsonic tunnel operates below the speed of sound. A transonic tunnel operates at about the speed of sound (Mach 1, roughly 760 miles per hour at sea level). A supersonic tunnel operates between about Mach 1.2 and Mach 5, up to five times the speed of sound. The fastest of them all, the hypersonic tunnel, operates above Mach 5; the fastest facilities can simulate speeds of more than 30,000 miles per hour (about Mach 39.5).
Wind Tunnel Tests
There are basically two types of wind tunnel tests: the static stability test and the pressure test. With these two tests you can determine the aerodynamic characteristics of the aircraft. The static stability test measures the forces and moments produced by the external flow. These include the axial, side, and normal forces and the rolling, pitching, and yawing moments. The forces are found with a strain-gauge balance mounted on the model, which measures the loads imposed by the external flow field. A shadowgraph can then be used to show the shock waves and flow fields at a given speed or angle of attack, and an oil-flow visualization shows the surface flow pattern.
The pressure test is used to measure the pressures acting on the test object. This is done by placing pressure taps over the surface. The taps are connected to transducers that read the local pressures. With this information engineers can balance the airplane. The static stability and pressure test data are then combined to find the distributed loads.
Wind Tunnels Used Today
Wind tunnels vary in size from a few inches across to the 12 m by 24 m (40 ft by 80 ft) tunnel located at the Ames Research Center of the National Aeronautics and Space Administration (NASA) at Moffett Field, California. This wind tunnel at Ames can accommodate a full-size aircraft with a wingspan of 22 m (72 ft). Ames also has a hypervelocity tunnel that can create air velocities of up to 30,000 mph (48,000 km/h) for one second. This high speed is achieved by placing a small model of the spacecraft in the tunnel and firing an explosive charge that drives gas into the tunnel in one direction, while a second explosive charge simultaneously pushes gas into the tunnel from the other direction. There is also a wind tunnel at the Lewis Flight Propulsion Laboratory, also owned by NASA, in Cleveland, Ohio, that can test full-size jet engines at air velocities of up to 2,400 mph (3,860 km/h) and at altitudes of up to 100,000 ft (30,500 m).
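The metric equivalents quoted above are rounded. As a quick check (my own sketch, not part of the report), the exact conversion factor of 1.609344 km per mile gives:

```python
# Check the mph-to-km/h conversions quoted in the text.
# The report's figures (48,000 and 3,860 km/h) are rounded values.
MPH_TO_KMH = 1.609344  # exact: 1 mile = 1.609344 km

for mph in (30000, 2400):
    print(f"{mph} mph = {mph * MPH_TO_KMH:,.0f} km/h")
# 30000 mph = 48,280 km/h
# 2400 mph = 3,862 km/h
```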
Benefits of the Wind Tunnel
There are many benefits to be gained from using a wind tunnel. Designing an airplane is a long, complicated, and expensive process. With a wind tunnel you can build models and test them at a fraction of the price of making the real thing. When designing an airplane, one has to take public safety into account while still making sure the design does what it is intended to do. With a wind tunnel you can design and test what you make before you build it.
With a wind tunnel you can also solve problems that already exist. One example occurred when the first jet-propelled aircraft were produced in the 1940s. When those jets released missiles carried on the external part of the plane, the missiles had a tendency to move up when released, causing collisions with the plane and the deaths of pilots. With the wind tunnel, engineers were able to solve this problem without the loss of any more lives.
Wind tunnels had become so important that on February 1, 1956, the Army formed the ABMA (Army Ballistic Missile Agency) at Redstone Arsenal in Huntsville, Alabama, from Army missile program assets. The agency was created to support ongoing research and development projects for the Army ballistic missile program, and it built a 14-inch wind tunnel to test the missiles.
Early tests were done to determine the aerodynamics of the Jupiter IRBM (Intermediate Range Ballistic Missile) and its nose cone. The Jupiter-C missile was one of the first launch vehicles tested in the wind tunnel. The Jupiter-C was a modified Redstone rocket built for nose cone re-entry testing. A modified Jupiter-C, the Juno 1, launched America's first satellite, Explorer 1, into orbit. Soon after this, the ABMA wind tunnel was transferred to NASA, and it went on to play a vital role in the exploration of space, from the Saturn V, the rocket that put the first man on the Moon during the Apollo program, to the current Space Shuttle launch vehicle. The tunnel's mission changed from testing medium- and long-range missiles to supporting America's "race into space." NASA increased the payload from the original 10 lb satellite (Explorer 1) to a man in a capsule (Project Mercury), and then to the Apollo program. The Saturn family of launch vehicles spent hundreds of hours in the wind tunnel, with various configurations tried to find the best result. At first NASA planned to build a fully reusable shuttle, but that idea cost too much and was ruled out for budget reasons. With the budget in mind, the current Space Shuttle began to take shape, but it still took many years in a wind tunnel before the final design of the Orbiter, External Tank, and Solid Rocket Boosters took the shape we know today. Even after the Space Shuttle took flight, it was still being tested to increase performance; tests were done, for example, to determine the cause of tile damage. The shuttle program progressed at a rapid pace until the Challenger accident brought it to a standstill. After the accident, the 14-inch wind tunnel was immediately put into use to analyze what had occurred. These tests verified what had happened with the SRB leak and the rupture of the External Tank's aft STA 2058 ring frame, and the data were used for trajectory and control reconstruction. With this information, engineers are trying to develop abort scenarios involving orbiter separation during transonic flight. All of these configurations were tested on a scale model that is 0.004 the size of the real shuttle.
These are just a few applications of the wind tunnel; there are many more. With the invention of the wind tunnel, the cost of designing and testing an aircraft has been reduced, and most important, lives have been saved. Without the wind tunnel there would be no way for us to know what will happen before it happens.

A Technical Analysis of Human Factors and Ergonomics in Modern Cockpit Design

I. Introduction
Since the dawn of the aviation era, cockpit design has become increasingly complicated owing to the advent of new technologies enabling aircraft to fly farther and faster more efficiently than ever before. With greater workloads imposed on pilots as fleets modernize, the reality of a pilot exceeding the workload limit has become manifest. Because of the unpredictable nature of man, this problem is impossible to eliminate completely. However, the instances of occurrence can be drastically reduced by examining the nature of man, how he operates in the cockpit, and what must be done by engineers to design a system in which man and machine are ideally interfaced. The latter point involves an in-depth analysis of system design with an emphasis on human factors, biomechanics, cockpit controls, and display systems. By analyzing these components of cockpit design, and determining which variables of each will yield the lowest errors, a system can be designed in which the Liveware-Hardware interface can promote safety and reduce mishap frequency.

II. The History Of Human Factors in Cockpit Design
The history of cockpit design can be traced as far back as the first balloon flights, where a barometer was used to measure altitude. The Wright brothers incorporated a string attached to the aircraft to indicate slips and skids (Hawkins, 241). However, the first real efforts towards human factors implementation in cockpit design began in the early 1930's. During this time, the United States Postal Service began flying aircraft in all-weather missions (Kane, 4:9). The greater reliance on instrumentation raised the question of where to put each display and control. However, not much attention was being focused on this area as engineers cared more about getting the instrument in the cockpit, than about how it would interface with the pilot (Sanders & McCormick, 739).
In the mid- to late 1930's, the development of the first gyroscopic instruments forced engineers to make their first major human factors-related decision. Rudimentary situation indicators raised concern about whether the displays should reflect the view as seen from inside the cockpit, having the horizon move behind a fixed miniature airplane, or as it would be seen from outside the aircraft. Until the end of World War II, aircraft were manufactured using both types of display. This caused confusion among pilots who were familiar with one type of display and were flying an aircraft with the other. Several safety violations were observed because of this, none of which were fatal (Fitts, 20-21).
Shortly after World War II, aircraft cockpits were standardized to the 'six-pack' configuration. This was a collection of the six critical flight instruments arranged in two rows of three directly in front of the pilot. In clockwise order from the upper left, they were the airspeed indicator, artificial horizon, altimeter, turn coordinator, heading indicator and vertical speed indicator. This arrangement of instruments provided easy transition training for pilots going from one aircraft to another. In addition, instrument scanning was enhanced, because the instruments were strategically placed so the pilot could reference each instrument against the artificial horizon in a hub and spoke method (Fitts, 26-30).
Since then, the bulk of human interfacing with cockpit development has been largely due to technological achievements. The dramatic increase in the complexity of aircraft after the dawn of the jet age brought with it a greater need than ever for automation that exceeded a simple autopilot. Human factors studies in other industries, and within the military paved the way for some of the most recent technological innovations such as the glass cockpit, Heads Up Display (HUD), and other advanced panel displays. Although these systems are on the cutting edge of technology, they too are susceptible to design problems, some of which are responsible for the incidents and accidents mentioned earlier. They will be discussed in further detail in another chapter (Hawkins, 249-54).

III. System Design
A design team should support the concept that the pilot's interface with the system, including task needs, decision needs, feedback requirements, and responsibilities, must be primary considerations for defining the system's functions and logic, as opposed to the system concept coming first and the user interface coming later, after the system's functionality is fully defined. There are numerous examples where application of human-centered design principles and processes could be better applied to improve the design process and final product. Although manufacturers utilize human factors specialists to varying degrees, they are typically brought into the design effort in limited roles or late in the process, after the operational and functional requirements have been defined (Sanders & McCormick, 727-8). When joining the design process late, the ability of the human factors specialist to influence the final design and facilitate incorporation of human-centered design principles is severely compromised. Human factors should be considered on par with other disciplines involved in the design process.
The design process can be seen as a six-step process; determining the objectives and performance specifications, defining the system, basic system design, interface design, facilitator design, and testing and evaluation of the system. This model is theoretical, and few design systems actually meet its performance objectives. Each step directly involves input from human factors data, and incorporates it in the design philosophy (Bailey, 192-5).
Determining the objectives and performance specifications includes defining a fundamental purpose of the system, and evaluating what the system must do to achieve that purpose. This also includes identifying the intended users of the system and what skills those operators will have. Fundamentally, this first step addresses a broad definition of what activity-based needs the system must address. The second step, definition of the system, determines the functions the system must do to achieve the performance specifications (unlike the broader purpose-based evaluation in the first step). Here, the human factors specialists will ensure that functions match the needs of the operator. During this step, functional flow diagrams can be drafted, but the design team must keep in mind that only general functions can be listed. More specific system characteristics are covered in step three, basic system design (Sanders & McCormick, 728-9).
The basic system design phase determines a number of variables, one of which is the allocation of functions to Liveware, Hardware, and Software. A sample allocation model considers five methods: mandatory, balance of value, utilitarian, affective and cognitive support, and dynamic. Mandatory allocation is the distribution of tasks based on limitations. There are some tasks which Liveware is incapable of handling, and likewise with Hardware. Other considerations with mandatory allocation are laws and environmental restraints. Balance of value allocation is the theory that each task is either incapable of being done by Liveware or Hardware, is better done by Liveware or Hardware, or can be done only by Liveware or Hardware. Utilitarian allocation is based on economic restraints. With the avionics package in many commercial jets costing as much as 15% of the overall aircraft price (Hawkins, 243), it would be very easy for design teams to allocate as many tasks to the operator as possible. This, in fact, was standard practice before the advent of automation as it exists today. The antithesis to that philosophy is to automate as many tasks as possible to relieve pressure on the pilot. Affective and cognitive support allocation recognizes the unique need of the Liveware component and assigns tasks to Hardware to provide as much information and decision-making support as possible. It also takes into account limitations, such as emotions and stress which can impede Liveware performance. Finally, dynamic allocation refers to an operator-controlled process where the pilot can determine which functions should be delegated to the machine, and which he or she should control at any time. Again, this allocation model is only theoretical, and often a design process will encompass all, or sometimes none of these philosophies (Sanders & McCormick, 730-4).
Basic system design also delegates Liveware performance requirements, characteristics that the operator must possess for the system to meet design specifications (such as accuracy, speed, training, proficiency). Once that is determined, an in-depth task description and analysis is created. This phase is essential to the human factors interface, because it analyzes the nature of the task and breaks it down into every step necessary to complete that task. The steps are further broken down to determine the following criteria: stimulus required to initiate the step, decision making which must be accomplished (if any), actions required, information needed, feedback, potential sources of error and what needs to be done to accomplish successful step completion. Task analysis is the foremost method of defining the Liveware-Hardware interface. It is imperative that a cockpit be designed using a process similar to this if it is to maintain effective communication between the operator and machine (Bailey, 202-6). It is widely accepted that the equipment determines the job. Based on that assumption, operator participation in this design phase can greatly enhance job enlargement and enrichment (Sanders & McCormick, 737; Hawkins, 143-4).
Interface design, the fourth process in the design model, analyzes the interfaces between all components of the SHEL model, with an emphasis on the human factors role in gathering and interpreting data. During this stage, evaluations are made of suggested designs, human factors data is gathered (such as statistical data on body dimensions), and any gathered data is applied. Any application of data goes through a sub-process that determines the data's practical significance, its interface with the environment, the risks of implementation, and any give and take involved. The last item involved in this phase is conducting Liveware performance studies to determine the capabilities and limitations of that component in the suggested design. The fifth step in the design stage is facilitator design. Facilitators are basically Software designs that enhance the Liveware-Hardware interface, such as operating manuals, placards, and graphs. Finally, the last design step is to conduct testing of the proposed design and evaluate the human factors input and interfaces between all components involved. An application of this process to each system design will enhance the operator's ability to control the system within desired specifications. Some of the specific design characteristics can be found in subsequent chapters.

IV. Biomechanics
In December of 1981, a Piper Comanche aircraft temporarily lost directional control in gusty conditions within the performance specifications of the aircraft. The pilot later reported that with the control column full aft, he was unable to maintain adequate aileron control because his knees were interfering with proper control movement (NTSB database). Although this is a small incident, it should alert engineers to a potential problem area. Probably the most fundamental, and easiest to quantify interface in the cockpit is the physical dimensions of the Liveware component and the Hardware designs which must accommodate it. The comfort of the workspace has long been known to alleviate or perpetuate fatigue over long periods of time (Hawkins, 282-3). These facts indicate a need to discuss the factors involved in workspace design.
When designing a cockpit, the engineer should determine the physical dimensions of the operator. Given the variable dimensions of the human body, it is naturally impossible to design a system that will accommodate all users. An industry standard is to design for 95% of the population by discarding the top and bottom 2.5% of any data set. From this, general design can be accomplished by incorporating the reach and strength limitations of smaller people, and the clearance limitations of larger people. Three basic design philosophies must be adhered to when designing around physical dimensions: reach and clearance envelopes, user position with respect to the display area, and the position of the body (Bailey, 273).
Other differences must be taken into account when designing a system, such as ethnic and gender differences. It is known, for example, that women are, on average, 7% shorter than men (Pheasant, 44). If the 95th percentile convention is used, the question arises: on which gender do we base it? One way to speak of the comparison is to discuss the F/M ratio, or the average female characteristic divided by the average male characteristic. Although this ratio doesn't take into account the possibility of overlap (i.e., the bottom 5th percentile of males are likely to be shorter than the top 5th percentile of females), that is not an issue in cockpit design (Pheasant, 44). The other variable, ethnicity, must also be evaluated in system design. Some Asian populations, for example, have a sitting height almost ten centimeters lower than Europeans (Pheasant, 50). This can raise a potential problem when designing an instrument panel, or windshield.
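The "middle 95%" convention described above can be made concrete with a short calculation. This is my own sketch, not from the paper: the mean and standard deviation below are assumed values chosen for illustration, and the dimension is treated as normally distributed.

```python
# Illustrative sketch: compute the 2.5th and 97.5th percentile cutoffs for a
# normally distributed body dimension, per the "design for the middle 95%"
# convention. The mean and standard deviation are assumed, not real data.
from statistics import NormalDist

stature = NormalDist(mu=175.0, sigma=7.0)  # stature in cm (assumed values)

p2_5 = stature.inv_cdf(0.025)   # smallest user the design must accommodate
p97_5 = stature.inv_cdf(0.975)  # largest user the design must accommodate

print(f"Design range: {p2_5:.1f} cm to {p97_5:.1f} cm")
```

Reach and strength limits would be driven by the lower cutoff, and clearance limits by the upper one, as the text describes.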
Some design guides have been established to help the engineer with conceptual problems such as these, but for the most part, systems designers are limited to data gathered from human factors research (Tillman & Tillman, 80-7). As one story went, during the final design phase of the Boeing 777, the chairman of United Airlines was invited to preview it. When he stood in his first class seat, his head collided with an overhead baggage rack. Boeing officials were apologetic, but the engineers were grinning inside. A few months later, the launch of the first 777 in service included overhead baggage racks that were much higher, and less likely to be involved in a collision. Unlike this experience, designing clearances and reach envelopes for a cockpit is too expensive to be a trial and error venture.

V. Controls
In early 1974, the NTSB released a recommendation to the FAA regarding control inconsistencies:



"A-74-39. Amend 14 CFR 23 to include specifications for standardizing fuel selection valve handle designs, displays, and modes of operation" (NTSB database).

A series of accidents occurred during transition training when pilots moving between the Beechcraft Bonanza and Baron aircraft confused the flap and gear handles:

"As part of a recently completed special investigation, the safety board reviewed its files for every inadvertent landing gear retraction accident between 1975 and 1978. These accidents typically happened because the pilot was attempting to put the flaps control up after landing, and moved the landing gear control instead. This inadvertent movement of the landing gear control was often attributed to the pilot's being under stress or distracted, and being more accustomed to flying aircraft in which these two controls were in exactly opposite locations. Two popular light aircraft, the Beech Bonanza and Baron, were involved in the majority of these accidents. The Bonanza constituted only about 30 percent of the active light single-engine aircraft fleet with retractable landing gear, but was involved in 16 of the 24 accidents suffered by this category of aircraft. Similarly, the Baron constituted only 16 percent of the light twin fleet, yet suffered 21 of the 39 such accidents occurring to these aircraft" (NTSB database).

Like biomechanics, the design of controls is the study of physical relationships within the Liveware-Hardware interface. However, control design philosophy tends to be more subtle, and there is slightly more emphasis on psychological components. A designer determines what kind of control to use in a system only after the purpose of the system has been established and the operator's needs and limitations are known.
In general, controls serve one of four actions: activation, discrete setting, quantitative setting, and continuous control. Activation controls are those that toggle a system on or off, like a light switch. Discrete setting switches are variable position switches with three or more options, such as a fuel selector switch with three settings. Quantitative setting switches are usually knobs that control a system along a predefined quantitative dimension, such as a radio tuner or volume control. Continuous controls are controls that require constant equipment control, such as a steering wheel. A control is a system, and therefore follows the same guidelines for system design described above. In general, there are a few guidelines to control design that are unique to that system. Controls should be easily identified by color coding, labeling, size and shape coding and location (Bailey, 258-64).
When designing controls, some general principles apply. Normal requirements for control operation should not exceed the maximum limitations of the least capable operator. More important controls should be given placement priority. The neutral position of the controls should correspond with the operator's most comfortable position, and full control deflection should not require an extreme body position (locked legs, or arms). Controls should be designed for the most biomechanically efficient operation. The number of controls should be kept to a minimum to reduce workload, or when that is not possible, combining activation controls into discrete controls is preferable. When designing a system, it should be noted that foot control is stronger, but less accurate, than hand control. Continuous control operation should be distributed around the body, instead of focused on one particular part, and should be kept as short as possible (Damon, 291-2).
Detailed studies have been conducted about control design, addressing such concerns as the ability of an operator to distinguish one control from another, the size and spacing of controls, and population stereotypes. It was found that even with vision available, easily discernible controls were mistaken for one another (Fitts, 898; Adams, 276). A study by Jenkins revealed a set of control knobs that were not prone to such error, or were less likely to yield errors (Adams, 276-7). Some of these have been incorporated in aircraft designs as recent as the Boeing 777. Another study, conducted by Bradley in 1969, revealed that the size and spacing of knobs was directly related to inadvertent operation. He believed that if a knob were too large, too small, too far apart, or too close together, the operator was prone to a greater error yield. In the study, Bradley concluded that the optimum spacing between half-inch knobs would be one inch between their edges. This would yield the lowest inadvertent knob operation (Fitts, 901-2; Adams, 278). Population stereotypes address the issue of how a control should be operated (should a light switch be moved up, to the left, to the right, or down to turn it on?). There are four advantages that follow a model of ideal control relationship: decreased reaction time, fewer errors, better speed of knob adjustment, and faster learning (Van Cott & Kinkade, 349). These operational advantages become a great source of error to the operator unfamiliar with the aircraft and experiencing stress. During a time of high workload, one characteristic of the Liveware component is to revert to what was first learned (Adams, 279-80). In the case of the Bonanza and Baron pilots, this was the case in mistaking the gear and flap switches.

VI. Displays
In late 1986, the NTSB released the following recommendation to the FAA based on three accidents that had occurred within the preceding two years:

"A-86-105. Issue an Air Carrier Operations Bulletin-Part 135, directing Principal Operations Inspectors to ensure that commuter air carrier training programs specifically emphasize the differences existing in cockpit instrumentation and equipment in the fleet of their commuter operators and that these training programs cover the human engineering aspects of these differences and the human performance problems associated with these differences" (NTSB database).

The instrumentation in a cockpit environment provides the only source of feedback to the pilot in instrument flying conditions. Therefore, it is a very valuable design characteristic, and special attention must be paid to optimum engineering. There are two basic kinds of instruments that accomplish this task: symbolic and pictorial instruments. All instruments are coded representations of what can be found in the real world, but some are more abstract than others. Symbolic instrumentation is usually more abstract than pictorial (Adams, 195-6). When designing a cockpit, the first consideration involves the choice between these two types of instruments. This decision is based directly on the operational requirements of the system, and the purpose of the system. Once this has been determined, the next step is to decide what sort of data is going to be displayed by the system, and choose a specific instrument accordingly.
Symbolic instrumentation usually displays a combination of four types of information: quantitative, qualitative, comparison, and check reading (Adams, 197). Quantitative instruments display the numerical value of a variable, which is best displayed using counters, or dials with a low degree of curvature. The preferable orientation of a straight dial would be horizontal, similar to the heading indicator found in glass cockpits. However, conflicting research has shown that no loss of accuracy could be noted with high curvature dials (Murrell, 162). Another experiment showed that moving index displays with a fixed pointer are more accurate than a moving pointer on a fixed index (Adams, 200-1). Qualitative reading is the judgment of approximate values, trends, directions, or rate of variable change. This information is displayed when a high level of accuracy is not required for successful task completion (Adams, 197). A study conducted by Grether and Connell in 1948 suggested that vertical straight dials are superior to circular dials because an increase in needle deflection will always indicate a positive change. However, conflicting arguments came from studies conducted a few years later that stated no ambiguity will manifest, provided no control inputs are made, if a circular dial is used. It has also been suggested that moving pointers along a fixed background are superior to fixed pointers, but the few errors in reading a directional gyro seem to disagree with this supposition (Murrell, 163). Comparisons of two readings are best shown on circular dials with no markings, but if they are necessary, the markings should not be closer than 10 degrees to each other (Murrell, 163). Check reading involves verifying whether a change has occurred from the desired value (Adams, 197). The most efficient instruments for this kind of task are those with a moving pointer.
However, the studies concerning this type of informational display have only been conducted with a single instrument. It is not known whether this is the most efficient instrument type when the operator is performing a quick scan (Murrell, 163-4).
The pictorial instrument is most efficiently used in situation displays, such as the attitude indicator or air traffic control radar. In one experiment, pilots were allowed to use various kinds of situation instruments to tackle a navigational problem. Their performance was recorded, and the procedure was repeated using different pilots with only symbolic instruments. Interestingly, the pilots given the pictorial instrumentation made no navigation errors, whereas those given the symbolic displays made errors almost ten percent of the time (Adams, 208-209). Regardless of these results, it has long been known that the most efficient navigational methods are accomplished by combining the advantages of these two types of instruments.

VII. Summary
The preceding chapters illustrate design-side techniques that can be incorporated by engineers to reduce the occurrence of mishaps due to Liveware-Hardware interface problems. The system design model presented is ideal and theoretical. To practice it would cost corporations much more money than they would save if they were to use less cost-efficient methods. However, today's society seems to be moving towards a global consensus to take safety more seriously, and perhaps in the future, total human factors optimization will become manifest. The discussion of biomechanics in chapter three was purposely broad, because it is such a wide and diverse field. The concepts touched upon indicate the areas of concern that a designer must address before creating a cockpit that is ergonomically friendly in the physical sense. Controls and displays hold a little more relevance, because they are the fundamental control and feedback devices involved in controlling the aircraft. These were discussed in greater detail because many of those concepts never reach the conscious mind of the operator. Although awareness of these factors is not critical to safe aircraft operation, they do play a vital role in the subconscious mind of the pilot during critical operational phases under high stress. Because of the unpredictable nature of man, it would be foolish to assume an environment with zero tolerance for potential errors like these, but further investigation into the design process, biomechanics, and control and display devices may yield greater insight as far as causal factors are concerned. Armed with this knowledge, engineers can set out to build aircraft not only to transport people and material, but also to save lives.

Calculus

"One of the greatest contributions to modern mathematics, science, and engineering was
the invention of calculus near the end of the 17th century," says The New Book of Popular
Science. Without the invention of calculus, many technological accomplishments, such as the
landing on the moon, would have been difficult.
The word "calculus" originated from the Latin word meaning pebble. This is probably
because people many years ago used pebbles to count and do arithmetic problems.
The two people credited with the discovery of the theorems of
calculus were Sir Isaac Newton of England and Baron Gottfried Wilhelm Leibniz of Germany. They
discovered these theorems during the 17th century within a few years of each other.
Isaac Newton was considered one of the greatest physicists of all time. He applied calculus to
his theories of motion and gravitational pull, and was able to
describe mathematically the motion of objects in the universe.
Calculus was invented to help solve problems dealing with "changing or
varying" quantities. Calculus is considered "mathematics of change."
There are some basic or general parts of calculus. Some of these are functions,
derivative, antiderivatives, sequences, integral functions, and multivariate calculus.
Some believe that calculus is too hard or impossible to learn without much memorization,
but if you think that calculus is all memorizing, then you will miss the object of learning
calculus. People say that calculus is just the revision or expansion of old or basic equations, and I
believe that also.
In economics and business there are several uses for calculus. One important application of
integral calculus in business is the evaluation of the area under a function. This can be used
in a probability model. Probability is another use of integral calculus in business, because you
can find how often something will fall within a certain range in a certain time. One function used
for probability is the uniform distribution: f(x) = 1 / (b - a) for a <= x <= b. One
economic use is figuring marginal and total cost, using the function TC = integral of MC = TVC + FC
(total cost equals total variable cost plus fixed cost).
Another is the demand for a product, for example the demand for beer, which brings in different variables
to model beer consumption. The function is a multivariate function f(m, p, r, s) =
(1.058)(m^0.136)(p^-0.727)(r^0.914)(s^0.816), where
m = aggregate real income; p = average retail price of beer;
r = average retail price level of all other consumer goods;
s = measure of strength of beer (how much consumers like it).
As you can see, if everything but r stays constant and r increases, then the demand will go up.
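The demand function above can be evaluated directly. The following is a minimal sketch with made-up input values (the text gives none); it shows that raising r while holding the other variables constant increases demand, because the exponent on r (0.914) is positive:

```python
def beer_demand(m, p, r, s):
    """Multivariate demand function from the text:
    m = aggregate real income, p = average retail price of beer,
    r = average price level of other consumer goods, s = beer strength."""
    return 1.058 * m**0.136 * p**-0.727 * r**0.914 * s**0.816

# Hold m, p, and s constant while r rises: demand goes up.
low_r = beer_demand(1000, 2.0, 100, 5)
high_r = beer_demand(1000, 2.0, 120, 5)
print(high_r > low_r)  # True
```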
To learn calculus, you need to know some frequently used terms. The derivative is the
fundamental concept of calculus describing how things change (for example, instantaneous
velocity). Functions are used in all applications: a function is an equation with one or
more variables in which each x value produces exactly one y value. You
will also need to learn and memorize some theorems and identities to be able to expand and break down
equations.
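The idea of the derivative as instantaneous velocity can be illustrated numerically. This is a generic finite-difference sketch (not anything from the text), using the position of a falling object as the example function:

```python
def derivative(f, x, h=1e-6):
    # Central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

# Position of a falling object: s(t) = 4.9 * t^2 (meters)
position = lambda t: 4.9 * t**2

# Instantaneous velocity at t = 3 seconds: s'(3) = 9.8 * 3 = 29.4 m/s
print(derivative(position, 3.0))  # approximately 29.4
```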

Pascal's triangle

Blaise Pascal was born at Clermont, Auvergne, France on June 19, 1623. He was the son of Étienne Pascal, his father, and Antoinette Bégone, his mother, who died when Blaise was only four years old. After her death, his only family was his father and his two sisters, Gilberte and Jacqueline, both of whom played key roles in Pascal's life. When Blaise was seven he moved from Clermont with his father and sisters to Paris. It was at this time that his father began to school his son. Though intellectually strong, Blaise had a frail physique.
Things went quite well at first for Blaise concerning his schooling. His father was amazed at the ease with which his son absorbed the classical education thrown at him and "tried to hold the boy down to a reasonable pace to avoid injuring his health." (P 74, Bell) Blaise was exposed to all subjects, all except mathematics, which was taboo. His father forbade it, in the belief that mathematics would strain Blaise's mind. Faced with this opposition, Blaise demanded to know 'what was mathematics?' His father told him, "that generally speaking, it was the way of making precise figures and finding the proportions among them." (P 39, Cole) This set him going, and during his play times in his room he figured out ways to draw geometric figures such as perfect circles and equilateral triangles, all of which he accomplished. Because Étienne took such painstaking measures to hide mathematics from Blaise, to the point where he told his friends not to mention math at all around him, Blaise did not know the names of these figures. So he created his own vocabulary for them, calling a circle a "round" and naming lines "bars". "After these definitions he made himself axioms, and finally made perfect demonstrations." (P 39, Cole) He progressed far enough to reach the 32nd proposition of Book One of Euclid. While Blaise was deeply enthralled in this task, his father entered the room unnoticed, only to observe his son inventing mathematics. When Blaise was 13, Étienne began taking him to meetings of mathematicians and scientists, which gave Blaise the opportunity to meet such minds as Descartes and Hobbes. Three years later, at the age of 16, Blaise amazed his peers by submitting a paper on conic sections. His sister was quoted as having said "that it was considered so great an intellectual achievement that people have said they have seen nothing as mighty since the time of Archimedes." (I: Pascal) This was his first real contribution to mathematics, but not his last.
Note: www.nd.edu/StudentLinks/akoehl/Pascal.html
Pascal's contributions to mathematics from then on were astounding. From a young age he was 'creating science.' His first scientific work, an essay on sounds, he prepared at a very young age. Once, at a dinner party, someone tapped a glass with a spoon. Pascal went about the house tapping the china with his fork, then disappeared into his room, only to emerge hours later having completed a short essay on sound. He used the same approach with all of the problems he encountered, working at them until he was satisfied with his understanding of the problem at hand. A few of his discoveries stood out more than others; among them, his calculating machine
and his contributions to combinatorial analysis have made a significant contribution to mathematics.
The mechanical calculator was devised by Pascal in 1642 and was brought to a commercial version in 1645. It was one of the earliest in the history of computing. 'Side by side in an oblong box were placed six small drums, round the upper and lower halves of which the numbers 0 to 9 were written, in descending and ascending orders respectively. According to whichever arithmetical process was currently in use, one half of each drum was shut off from outside view by a sliding metal bar: the upper row of figures was for subtraction, the lower for addition. Below each drum was a wheel consisting of ten (or twenty or twelve) movable spokes inside a fixed rim numbered in ten (or more) equal sections from 0 to 9, rather like a clock face. Wheels and rims were all visible on the box lid, and indeed the numbers to be added or subtracted were fed into the machine by means of the wheels: 4, for instance, being recorded by using a small pin to turn the spoke opposite division 4 as far as a catch positioned close to the outer edge of the box. The procedure for a basic arithmetical process was then as follows.
To add 315 + 172, first 315 was recorded on the three (out of six) drums closest to the right-hand side: 5 would appear in the sighting aperture to the extreme right, 1 next to it, and 3 next to that again. To increase by one the number showing in any aperture, it was necessary to turn the appropriate drum forward 1/10th of a revolution. Thus in this sum, the drum on the extreme right of the machine would be given two turns, the drum immediately to its left would be moved on 7/10ths of a revolution, whilst the drum to its immediate left would be rotated forward by 1/10th. The total of 487 could then be read off in the appropriate slots. But, easy as this operation was, a problem clearly arose when the numbers to be added together involved totals needing to be carried forward: say 315 + 186. At the period at which Pascal was working, and because there had been no previous attempt at a calculating machine capable of carrying column totals forward, this presented a serious technical challenge. (Adamson, p. 23)
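The carrying problem Adamson describes can be sketched in code: each drum holds one decimal digit, and a carry advances the next drum one step. This is an illustration of the arithmetic only, not a model of the actual mechanism:

```python
def drum_add(a, b, drums=6):
    """Add two numbers digit by digit with carries, the way Pascal's
    six-drum machine accumulated a sum (an arithmetic sketch only)."""
    result = []
    carry = 0
    for _ in range(drums):
        total = a % 10 + b % 10 + carry
        result.append(total % 10)  # digit shown in the sighting aperture
        carry = total // 10        # carried forward into the next drum
        a //= 10
        b //= 10
    return int("".join(str(d) for d in reversed(result)))

print(drum_add(315, 172))  # 487, no carries needed
print(drum_add(315, 186))  # 501, exercising the carry mechanism
```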


Pascal is also credited with the advent of Pascal's triangle: an arrangement of numbers originally discovered by the Chinese but named after Pascal due to his further discoveries into the properties it possessed.
ex. (Pascal's Triangle)
           1
          1 1
         1 2 1
        1 3 3 1
          ...
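The triangle above can be generated by the rule Pascal used, each interior entry being the sum of the two entries above it. A minimal sketch:

```python
def pascal_rows(n):
    """First n rows of Pascal's triangle, each row built from the one above."""
    rows = [[1]]
    for _ in range(n - 1):
        prev = rows[-1]
        rows.append([1] + [prev[i] + prev[i + 1] for i in range(len(prev) - 1)] + [1])
    return rows

for row in pascal_rows(5):
    print(row)
# [1]
# [1, 1]
# [1, 2, 1]
# [1, 3, 3, 1]
# [1, 4, 6, 4, 1]
```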

'Pascal investigated binomial coefficients and laid the foundations of the binomial
theorem.' (Adamson, p. 37) A triangular array of numbers consists of ones written on the vertical leg and on the hypotenuse of a right-angled isosceles triangle; each other element composing the triangle is the sum of the element directly above it and of the element above it and to the left. Pascal proceeded from this to demonstrate that the numbers in the (n+1)st row are the coefficients in the binomial expansion of (x+y)^n. Due to the ease and clarity of the formulation of the problems involved, Pascal's triangle, although not original, was one of his finest achievements. It has greatly influenced many discoveries, including the theoretical basis of the computer, and it has made an essential contribution to the field of combinatorial analysis. Also, 'through the work of John Wallis it led Isaac Newton to the discovery of the binomial theorem for fractional and negative indices, and it was central to Leibniz's discovery of the calculus.' (Adamson, p. 37)
As stated, looking closer at the triangle, Pascal was able to deduce many properties. First of all, the entries in any row of the triangle that are an equal distance from each end are equal: each row is symmetric.
He found that another property can be derived from the triangle. He discovered that any number in the triangle is the sum of the two numbers directly above it. This holds true throughout the triangle. In binomial-coefficient form, C(3,1) + C(3,2) = C(4,2), and similarly C(5,1) + C(5,2) = C(6,2). The generalization of this property is known as Pascal's rule.
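The addition property can be checked directly with binomial coefficients, using Python's math.comb for C(n, k):

```python
from math import comb

# Pascal's rule: C(n, k-1) + C(n, k) == C(n+1, k)
assert comb(3, 1) + comb(3, 2) == comb(4, 2)  # 3 + 3 == 6
assert comb(5, 1) + comb(5, 2) == comb(6, 2)  # 5 + 10 == 15

# It holds everywhere in the triangle:
for n in range(1, 10):
    for k in range(1, n + 1):
        assert comb(n, k - 1) + comb(n, k) == comb(n + 1, k)
print("Pascal's rule holds")
```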
Further studies in hydrodynamics, hydrostatics, and atmospheric pressure led Pascal to many discoveries still in use today, such as the syringe and the hydraulic press. Both these inventions came after years of experimenting with vacuum tubes. One such experiment was to 'Take a tube which is curved at its bottom end, sealed at its top end A and open at its extremity B. Another tube, a completely straight one open at both extremities M and N, is joined into the curved end of the first tube by its extremity M. Seal B, the opening of the curved end of the first tube, either with your finger or in some other manner, and turn the entire apparatus upside down so that, in other words, the two tubes really only consist of one tube, being interconnected. Fill this tube with quicksilver and turn it the right way up again so that A is at the top; then place the end N in a dishful of quicksilver. The whole of the quicksilver in the upper tube will fall down, with the result that it will all recede into the curve unless by any chance part of it also flows through the aperture M into the tube below. But the quicksilver in the lower tube will only partially subside, as part of it will also remain suspended at a height of 26-27 inches according to the place and weather conditions in which the experiment is being carried out.
The reason for this difference is that the air weighs down on the quicksilver in the dish beneath the lower tube, and thus the quicksilver which is inside that tube is held suspended in balance.
But it does not weigh down upon the quicksilver at the curved end of the upper tube, for the finger or bladder sealing this prevents any access to it, so that, as no air is pressing down at this point, the quicksilver in the upper tube drops freely because there is nothing to hold it up or to resist its fall.
All of these contributions have made a lasting impact on all of mankind. Everything that Pascal created is still in use today in some way or another. His primitive form of a syringe is still used in the medical field today to administer drugs and remove blood. The work he did on combinatorial mathematics can be applied by anyone to 'figure out the odds' concerning a situation, which is exactly how he used it: by going to casinos and playing games smart, something that anyone can do today. The work he did concerning hydraulic presses is still in use today in factories and car garages.

Blaise Pascal

Blaise Pascal was born in Clermont, France on June 19, 1623, and died in Paris on Aug. 19, 1662. His father, a local judge at Clermont and also a man with a scientific reputation, moved the family to Paris in 1631, partly to pursue his own scientific studies, partly to carry on the education of his only son, who had already displayed exceptional ability. Blaise was kept at home in order to ensure his not being overworked, and it was directed that his education should be at first confined to the study of languages, and should not include any mathematics. Young Pascal was very curious; one day at the age of twelve, while studying with his tutor, he asked about the study of geometry. After this he began to give up his play time to pursue the study of geometry. After only a few weeks he had mastered many properties of figures, in particular the proposition that the sum of the angles of a triangle is equal to two right angles. His father noticed his son's ability in mathematics and gave him a copy of Euclid's Elements, a book which Pascal read and soon mastered. At the young age of fourteen he was admitted to the weekly meetings of Roberval, Mersenne, Mydorge, and other French geometricians. At the age of sixteen he wrote an essay on conic sections; and in 1641, at the age of 18, he constructed the first arithmetical machine, an instrument with metal dials on the front on which the numbers were entered. Once the entries had been completed, the answer would be displayed in small windows on the top of the device. This device was improved eight years later. His correspondence with Fermat about this time shows that he was then turning his attention to analytical geometry and physics. At this time he repeated Torricelli's experiments, by which the pressure of the atmosphere could be estimated as a weight, and he confirmed his theory of the cause of barometrical variations by obtaining at the same instant readings at different altitudes on the hill of Puy-de-Dôme.

A strange thing about Pascal was that in 1650 he stopped all his research and his favorite studies to begin the study of religion, or, as he says in his Pensées, to "contemplate the greatness and the misery of man." Also about this time he encouraged the younger of his two sisters to enter the Port Royal society. In 1653, after the death of his father, he returned to his old studies again, and made several experiments on the pressure exerted by gases and liquids; it was also about this period that he invented the arithmetical triangle, and together with Fermat created the calculus of probabilities. At this time he was thinking about getting married, but an accident caused him to return to his religious life. While he was driving a four-horse carriage, the two lead horses ran off the bridge. The only thing that saved him was the traces breaking. Always somewhat of a mystic, he considered this a special summons to abandon the world of science and return to his studies of religion. He wrote an account of the accident on a small piece of paper, which for the rest of his life he wore next to his heart, to remind him of his covenant. Shortly after the accident he moved to Port Royal, where he continued to live until his death in 1662. Besides the arithmetical machine and Pascal's Theorem, Pascal also made the Arithmetical Triangle in 1653 and did his work on the theory of probabilities in 1654.

Apollonius of Perga

Apollonius was a great mathematician, known by his contemporaries as "The Great Geometer," whose treatise Conics is one of the greatest scientific works from the ancient world. Most of his other treatises were lost, although their titles and a general indication of their contents were passed on by later writers, especially Pappus of Alexandria.

As a youth Apollonius studied in Alexandria (under the pupils of Euclid, according to Pappus) and subsequently taught at the university there. He visited Pergamum, capital of a Hellenistic kingdom in western Anatolia, where a university and library similar to those in Alexandria had recently been built. While at Pergamum he met Eudemus and Attalus, and he wrote the first edition of Conics. He addressed the prefaces of the first three books of the final edition to Eudemus and the remaining volumes to Attalus, whom some scholars identify as King Attalus I of Pergamum.

It is clear from Apollonius' allusions to Euclid, Conon of Samos, and Nicoteles of Cyrene that he made the fullest use of his predecessors' works. Books 1-4 contain a systematic account of the essential principles of conics, which for the most part had been previously set forth by Euclid, Aristaeus, and Menaechmus. A number of theorems in Book 3 and the greater part of Book 4 are new, however, and he introduced the terms parabola, ellipse, and hyperbola. Books 5-7 are clearly original. His genius takes its highest flight in Book 5, in which he considers normals as minimum and maximum straight lines drawn from given points to the curve (independently of tangent properties), discusses how many normals can be drawn from particular points, finds their feet by construction, and gives propositions determining the center of curvature at any point, leading at once to the Cartesian equation of the evolute of any conic.

The first four books of the Conics survive in the original Greek and the next three in Arabic translation. Book 8 is lost. The only other extant work of Apollonius is Cutting Off of a Ratio (or On Proportional Section), in an Arabic translation. Pappus mentions five additional works: Cutting Off an Area (or On Spatial Section), On Determinate Section, Tangencies, and Plane Loci.

Tangencies embraced the following general problem: given three things, each of which may be a point, straight line, or circle, construct a circle tangent to the three. Sometimes known as the problem of Apollonius, the most difficult case arises when the three given things are circles.

Of the other works of Apollonius referred to by ancient writers, one, On the Burning Mirror, concerned optics. Apollonius demonstrated that parallel light rays striking a spherical mirror would not be reflected to the center of sphericity, as was previously believed. The focal properties of the parabolic mirror were also discussed. A work entitled On the Cylindrical Helix is mentioned by Proclus. Apollonius also wrote Comparison of the Dodecahedron and the Icosahedron, considering the case in which they are inscribed in the same sphere. According to Eutocius, in Apollonius' work Quick Delivery, closer limits for the value of pi than the 3 1/7 and 3 10/71 of Archimedes were calculated. In a work of unknown title, Apollonius developed his system of tetrads, a method for expressing and multiplying large numbers. His On Unordered Irrationals extended the theory of irrationals originally advanced by Eudoxus of Cnidus and found in Book 10 of Euclid's Elements.

Lastly, from references in Ptolemy's Almagest, it is known that Apollonius introduced the systems of eccentric and epicyclic motion to explain planetary motion. Of particular interest was his determination of the points where a planet appears stationary.

Ancient Advances in Mathematics


Ancient knowledge of the sciences was often wrong and wholly unsatisfactory by modern standards.  However, not all of the knowledge of the more learned peoples of the past was false.  In fact, without people like Euclid or Plato we might not have been as advanced in this age as we are.  Mathematics is an adventure in ideas.  Within the history of mathematics, one finds the ideas and lives of some of the most brilliant people in the history of mankind.
      First, man created a number system of base 10.  Certainly, it is not just coincidence that man happens to have ten fingers and ten toes, for when our primitive ancestors first discovered the need to count they would have used their fingers to help them along, just like a child today.  When primitive man learned to count up to ten he somehow differentiated himself from other animals.  As a being of higher thinking, man invented ten number-sounds.  The needs and possessions of primitive man were not many.  When the need to count over ten arose, he simply combined the number-sounds related with his fingers.  So, if he wished to define one more than ten, he simply said one-ten.  Thus our word eleven is simply a modern form of the Teutonic ein-lifon.  Since those first sounds were created, man has only added five new basic number-sounds to the ten primary ones.  They are "hundred," "thousand," "million," "billion" (a thousand millions in America, a million millions in England), and "trillion" (a million millions in America, a million million millions in England).  Because primitive man invented the same number of number-sounds as he had fingers, our number system is a decimal one, or a scale based on ten, consisting of limitless repetitions of the first ten number-sounds.
      Undoubtedly, if nature had given man thirteen fingers instead of ten, our number system would be much changed.  For instance, with a base-thirteen number system we might call fifteen "two-thirteen."
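The claim can be checked by converting a number into another base; a minimal sketch (the helper here is just for illustration). Fifteen in base thirteen is written with the digits 1 and 2, i.e. one thirteen plus two:

```python
def to_base(n, base):
    """Digits of n written in the given base, most significant first."""
    digits = []
    while n:
        digits.append(n % base)
        n //= base
    return list(reversed(digits)) or [0]

print(to_base(15, 13))  # [1, 2]: one thirteen plus two, "two-thirteen"
print(to_base(15, 10))  # [1, 5]: one ten plus five, "fifteen"
```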
While some intelligent and well-schooled scholars might argue whether or not base ten is the most adequate number system, base ten is the irreversible favorite among all the nations.
      Of course, primitive man most certainly did not realize the concept of the number system he had just created.  Man simply used the number-sounds loosely as adjectives.  So an amount of ten fish was ten fish, where ten is an adjective describing the noun fish.
      Soon the need to keep a tally of one's counting arose.  The simple solution was to make a vertical mark.  Thus, in many caves we see a number of marks that the resident used to keep track of his possessions, such as fish or knives.  This way of record keeping is still taught today in our schools under the name of tally marks.

      The earliest continuous record of mathematical activity is from the second millennium BC.  When the wonders of the ancient world were created, mathematics was necessary: even the earliest Egyptian pyramid, from approximately 2900 BC, proved that its makers had a fundamental knowledge of geometry and surveying skills.
      The first proof of mathematical activity in written form came about one thousand years later.  The best known sources of ancient Egyptian mathematics in written format are the Rhind Papyrus and the Moscow Papyrus.  These sources provide undeniable proof that the later Egyptians had intermediate knowledge of the following mathematical problems: applications to surveying, salary distribution, calculation of the surface areas and volumes of simple geometric figures, and simple solutions for first- and second-degree equations.


      Egyptians used a base ten number system, most likely for biological reasons (ten fingers, as explained above).  They used the natural numbers (1, 2, 3, 4, 5, 6, etc.), also known as the counting numbers.  The word digit, which is Latin for finger, is also another name for number, which again shows the influence of fingers upon numbers.
      The Egyptians produced a more complex system than the tally system for recording amounts.  Hieroglyphs stood for groups of tens, hundreds, and thousands.  The higher powers of ten made it much easier for the Egyptians to calculate with numbers as large as one million.  Our number system, which is both decimal and positional (52 is not the same value as 25), differed from the Egyptian one, which was additive but not positional.
      The Egyptians also knew more of pi than its mere existence.  They found pi to equal C/D, approximated as 4(8/9)^2, which is about 3.16.  The method ancient peoples used for arriving at a numerical value was fairly easy: they simply counted how many times a string that fit the circumference of the circle fitted into the diameter, giving the rough approximation of 3.
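The Egyptian value can be computed exactly as a fraction: 4(8/9)^2 = 256/81, roughly 3.1605, noticeably closer to pi than the rough string estimate of 3:

```python
from fractions import Fraction

# The Egyptian circle rule implies pi is approximately 4 * (8/9)^2
egyptian_pi = 4 * Fraction(8, 9) ** 2
print(egyptian_pi)         # 256/81
print(float(egyptian_pi))  # about 3.1605
```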
      The biblical value of pi can be found in the Old Testament (I Kings vii.23 and 2 Chronicles iv.2) in the following verse:
            "Also, he made a molten sea of ten cubits from
            brim to brim, round in compass, and five cubits
            the height thereof; and a line of thirty cubits did
            compass it round about."
      The molten sea, as we are told, is round, and measures thirty cubits round about (in circumference) and ten cubits from brim to brim (in diameter).  Thus the biblical value for pi is 30/10 = 3.

      Now we travel to ancient Mesopotamia, home of the early Babylonians.  Unlike the Egyptians, the Babylonians developed a flexible technique for dealing with fractions.  The Babylonians also succeeded in developing more sophisticated base-ten arithmetic that was positional, and they stored mathematical records on clay tablets.
      Despite all this, the greatest and most remarkable feature of Babylonian mathematics was their complex usage of a sexagesimal place-valued system in addition to a decimal system much like our own modern one.  The Babylonians counted in both groups of ten and sixty.  Because of the flexibility of a sexagesimal system with fractions, the Babylonians were strong in both algebra and number theory.  Remaining clay tablets from the Babylonian records show solutions to first-, second-, and third-degree equations.  The calculations of compound interest, squares, and square roots are also apparent in the tablets.
      The sexagesimal system of the Babylonians is still commonly in use today.  Our system for telling time revolves around a sexagesimal system; the same system for telling time that is used today was also used by the Babylonians.  We also use base sixty with circles (360 degrees to a circle).
      Usage of the sexagesimal system was principally for economic reasons: the main units of weight and money were the mina (60 shekels) and the talent (60 minas).  This sexagesimal arithmetic was used in commerce and in astronomy.
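The same base-60 place-value idea survives in hours, minutes, and seconds. A short sketch converting a count of seconds into its sexagesimal digits:

```python
def to_sexagesimal(n):
    """Base-60 'digits' of n, most significant first -- the Babylonian
    place-value system still visible in hours:minutes:seconds."""
    digits = []
    while n:
        digits.append(n % 60)
        n //= 60
    return list(reversed(digits)) or [0]

# 4096 seconds = 1*3600 + 8*60 + 16, i.e. 1:08:16
print(to_sexagesimal(4096))  # [1, 8, 16]
```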
      The Babylonians used many of the more common cases of the Pythagorean theorem for right triangles.  They also used accurate formulas for finding the areas, volumes, and other measurements of the easier geometric shapes, as well as trapezoids.  The Babylonian value for pi was a very rounded-off three.  Because of this crude approximation of pi, the Babylonians achieved only rough estimates of the areas of circles and other round geometric objects.

      The real birth of modern math was in the era of Greece and Rome.  Not only did the philosophers ask the question "how," as previous cultures had, but they also asked the modern question of "why."  The goal of this new thinking was to discover and understand the reason for man's existence in the universe and also to find his place.  The philosophers of Greece used mathematical formulas to prove propositions of mathematical properties.  Some of them, like Aristotle, engaged in the theoretical study of logic and the analysis of correct reasoning.  Up until this point in time, no previous culture had dealt with the abstract side of mathematics, or with the concept of the mathematical proof.
      The Greeks were interested not only in the application of mathematics but also in its philosophical significance, which was especially appreciated by Plato (429-348 BC).  Plato was of the richer class of gentlemen of leisure.  He, like others of his class, looked down upon the work of slaves and craftsworkers.  He sought relief from the tiresome worries of life in the study of philosophy and personal ethics.  Within the walls of Plato's academy at least three great mathematicians were taught: Theaetetus, known for the theory of irrationals; Eudoxus, for the theory of proportions; and also Archytas (I couldn't find what made him great, but three books mentioned him so I will too).  Indeed, the motto of Plato's academy, "Let no one ignorant of geometry enter within these walls," was fitting for the scene of the great minds who gathered there.
      Another great mathematician of the Greeks was Pythagoras, who provided one of the first mathematical proofs and discovered incommensurable magnitudes, or irrational numbers.  The Pythagorean theorem relates the sides of a right triangle to their corresponding squares.  The discovery of irrational magnitudes had another consequence for the Greeks: since the lengths of diagonals of squares could not be expressed by rational numbers in the form of A over B, the Greek number system was inadequate for describing them.
      As you might have realized, without the great minds of the past our mathematical experiences would be quite different from the way they are today.  Yet, as some famous (or maybe infamous) person must have once said, "From down here the only way is up," so you might say that from now, 1996, the future of mathematics can only improve for the better.