Monday, September 19, 2011
DEATH
The nature of death has been for millennia a central concern of the world's religious traditions and of philosophical enquiry, and belief in some kind of afterlife or rebirth has been a central aspect of religious faith.
Etymology
The word death comes from Old English deað, which in turn comes from Proto-Germanic *dauþaz (reconstructed by etymological analysis). This comes from the Proto-Indo-European stem *dheu-, meaning 'process, act, or condition of dying'.
Dauþaz was reconstructed by comparing the daughter languages of Proto-Germanic, such as doth from Old Saxon, dath from Old Frisian, dood from Dutch, tod from Old High German, dauði from Old Norse and modern-day Icelandic, död from Swedish, and dauþas from Gothic.[1]
Senescence
Almost all animals fortunate enough to survive hazards to their existence eventually die from senescence. The only known exception is the jellyfish Turritopsis nutricula, thought to be, in effect, immortal.[2] Causes of death in humans as a result of intentional activity include suicide and homicide. From all causes, roughly 150,000 people die around the world each day.[3]
Physiological death is now seen as a process, more than an event: conditions once considered indicative of death are now reversible.[4] Where in the process a dividing line is drawn between life and death depends on factors beyond the presence or absence of vital signs. In general, clinical death is neither necessary nor sufficient for a determination of legal death. A patient with working heart and lungs determined to be brain dead can be pronounced legally dead without clinical death occurring. Precise medical definition of death, in other words, becomes more problematic, paradoxically, as scientific knowledge and medicine advance.[5]
Symptoms of death
Signs of death or strong indications that a person is no longer alive are:
Cessation of breathing
Cardiac arrest (no pulse)
Pallor mortis, a paleness that sets in within 15–120 minutes after death
Livor mortis, a settling of the blood in the lower (dependent) portion of the body
Algor mortis, the reduction in body temperature following death, generally a steady decline until the body matches ambient temperature (a rough cooling sketch follows this list)
Rigor mortis, the stiffening of the limbs of the corpse (Latin rigor), which become difficult to move or manipulate
Decomposition, the reduction into simpler forms of matter, accompanied by a strong, unpleasant odor.
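The steady decline of body temperature toward ambient can be pictured with Newton's law of cooling, a common first approximation. The snippet below is a minimal sketch, not a forensic method: the starting temperature, ambient temperature and rate constant are assumed values chosen purely for illustration.

```python
# Minimal sketch: algor mortis approximated by Newton's law of cooling.
# T(t) = T_ambient + (T_body - T_ambient) * exp(-k * t)
# The rate constant k is an illustrative assumption, not a forensic value.
import math

T_BODY = 37.0      # assumed normal body temperature, deg C
T_AMBIENT = 20.0   # assumed room temperature, deg C
K = 0.08           # assumed cooling rate constant per hour (illustrative)

def body_temperature(hours_since_death: float) -> float:
    """Approximate body temperature after a given number of hours."""
    return T_AMBIENT + (T_BODY - T_AMBIENT) * math.exp(-K * hours_since_death)

for h in (0, 6, 12, 24, 48):
    print(f"{h:>2} h: {body_temperature(h):.1f} deg C")
```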
Diagnosis
Problems of definition
A flower, a skull and an hourglass stand in for Life, Death and Time in this 17th-century painting by Philippe de Champaigne
Defining the concept of death is key to human understanding of the phenomenon.[6] There are many scientific approaches to the concept. For example, brain death, as practiced in medical science, defines death as a point in time at which brain activity ceases.[7][8][9][10] One of the challenges in defining death is in distinguishing it from life. As a point in time, death would seem to refer to the moment at which life ends. However, determining when death has occurred requires drawing precise conceptual boundaries between life and death. This is problematic because there is little consensus over how to define life. It is possible to define life in terms of consciousness: when consciousness ceases, a living organism can be said to have died. One of the notable flaws in this approach, however, is that there are many organisms which are alive but probably not conscious (for example, single-celled organisms). Another problem is in defining consciousness itself, which has many different definitions among modern scientists, psychologists and philosophers. This general problem of defining death applies to the particular challenge of defining death in the context of medicine.
Other definitions for death focus on the character of cessation of something.[11] In this context "death" describes merely the state where something has ceased, for example, life. Thus, the definition of "life" simultaneously defines death.
Historically, attempts to define the exact moment of a human's death have been problematic. Death was once defined as the cessation of heartbeat (cardiac arrest) and of breathing, but the development of CPR and prompt defibrillation have rendered that definition inadequate because breathing and heartbeat can sometimes be restarted. Events which were causally linked to death in the past no longer kill in all circumstances; without a functioning heart or lungs, life can sometimes be sustained with a combination of life support devices, organ transplants and artificial pacemakers.
Today, where a definition of the moment of death is required, doctors and coroners usually turn to "brain death" or "biological death" to define a person as being clinically dead; people are considered dead when the electrical activity in their brain ceases. It is presumed that an end of electrical activity indicates the end of consciousness. However, suspension of consciousness must be permanent, and not transient, as occurs during certain sleep stages, and especially a coma. In the case of sleep, EEGs can easily tell the difference.
However, the category of "brain death" is seen by some scholars to be problematic. For instance, Dr. Franklin Miller, senior faculty member at the Department of Bioethics, National Institutes of Health, notes: "By the late 1990s, however, the equation of brain death with death of the human being was increasingly challenged by scholars, based on evidence regarding the array of biological functioning displayed by patients correctly diagnosed as having this condition who were maintained on mechanical ventilation for substantial periods of time. These patients maintained the ability to sustain circulation and respiration, control temperature, excrete wastes, heal wounds, fight infections and, most dramatically, to gestate fetuses (in the case of pregnant "brain-dead" women)."[12]
Those people maintaining that only the neo-cortex of the brain is necessary for consciousness sometimes argue that only electrical activity should be considered when defining death. Eventually it is possible that the criterion for death will be the permanent and irreversible loss of cognitive function, as evidenced by the death of the cerebral cortex. All hope of recovering human thought and personality is then gone given current and foreseeable medical technology. However, at present, in most places the more conservative definition of death – irreversible cessation of electrical activity in the whole brain, as opposed to just in the neo-cortex – has been adopted (for example the Uniform Determination Of Death Act in the United States). In 2005, the Terri Schiavo case brought the question of brain death and artificial sustenance to the front of American politics.
Even by whole-brain criteria, the determination of brain death can be complicated. EEGs can detect spurious electrical impulses, while certain drugs, hypoglycemia, hypoxia, or hypothermia can suppress or even stop brain activity on a temporary basis. Because of this, hospitals have protocols for determining brain death involving EEGs at widely separated intervals under defined conditions.
Legal
See also: Legal death
A dead Confederate soldier sprawled out in Petersburg, Virginia, 1865, during the American Civil War
In the United States, a person is dead by law if a Statement of Death or Death certificate is approved by a licensed medical practitioner. Various legal consequences follow death, including the removal from the person of what in legal terminology is called personhood.
The possession of brain activities, or capability to resume brain activity, is a necessary condition to legal personhood in the United States. "It appears that once brain death has been determined ... no criminal or civil liability will result from disconnecting the life-support devices." (Dority v. Superior Court of San Bernardino County, 193 Cal.Rptr. 288, 291 (1983))
Misdiagnosed
See also: Premature burial
There are many anecdotal references to people being declared dead by physicians and then "coming back to life", sometimes days later in their own coffin, or when embalming procedures are about to begin. From the mid-18th century onwards, there was an upsurge in the public's fear of being mistakenly buried alive,[13] and much debate about the uncertainty of the signs of death. Various suggestions were made to test for signs of life before burial, ranging from pouring vinegar and pepper into the corpse's mouth to applying red hot pokers to the feet or into the rectum.[14] Writing in 1895, the physician J.C. Ouseley claimed that as many as 2,700 people were buried prematurely each year in England and Wales, although others estimated the figure to be closer to 800.[15]
In cases of electric shock, cardiopulmonary resuscitation (CPR) for an hour or longer can allow stunned nerves to recover, allowing an apparently dead person to survive. People found unconscious under icy water may survive if their faces are kept continuously cold until they arrive at an emergency room.[16] This "diving response", in which metabolic activity and oxygen requirements are minimal, is something humans share with cetaceans and is known as the mammalian diving reflex.[16]
As medical technologies advance, ideas about when death occurs may have to be re-evaluated in light of the ability to restore a person to vitality after longer periods of apparent death (as happened when CPR and defibrillation showed that cessation of heartbeat is inadequate as a decisive indicator of death). The lack of electrical brain activity may not be enough to consider someone scientifically dead. Therefore, the concept of information theoretical death has been suggested as a better means of defining when true death occurs, though the concept has few practical applications outside of the field of cryonics.
There have been some scientific attempts to bring dead organisms back to life, but with limited success.[17] In science fiction scenarios where such technology is readily available, real death is distinguished from reversible death.
Causes
See also: List of causes of death by rate and List of preventable causes of death
The leading cause of death in developing countries is infectious disease. The leading causes of death in developed countries are atherosclerosis (heart disease and stroke), cancer, and other diseases related to obesity and aging. These conditions cause loss of homeostasis, leading to cardiac arrest, loss of oxygen and nutrient supply, and irreversible deterioration of the brain and other tissues. Of the roughly 150,000 people who die each day across the globe, about two thirds die of age-related causes.[3] In industrialized nations, the proportion is much higher, reaching 90%.[3] With improved medical capability, dying has become a condition to be managed. Home deaths, once commonplace, are now rare in the developed world.
The body of Pope John Paul II lying in state in St. Peter's Basilica, 2005
In developing nations, inferior sanitary conditions and lack of access to modern medical technology make death from infectious diseases more common than in developed countries. One such disease is tuberculosis, a bacterial disease which killed 1.7 million people in 2004.[18] Malaria causes about 400–900 million cases of fever and 1–3 million deaths annually.[19] The AIDS death toll in Africa may reach 90–100 million by 2025.[20][21]
According to Jean Ziegler, the United Nations Special Rapporteur on the Right to Food from 2000 to March 2008, mortality due to malnutrition accounted for 58% of total mortality in 2006. Ziegler says that worldwide approximately 62 million people died from all causes and that, of those deaths, more than 36 million died of hunger or of diseases due to deficiencies in micronutrients.[22]
Tobacco smoking killed 100 million people worldwide in the 20th century and could kill 1 billion people around the world in the 21st century, a WHO Report warned.[23][24]
Many leading developed world causes of death can be postponed by diet and physical activity, but the accelerating incidence of disease with age still imposes limits on human longevity. The evolutionary cause of aging is, at best, only just beginning to be understood. It has been suggested that direct intervention in the aging process may now be the most effective intervention against major causes of death.[25]
Autopsy
An autopsy, also known as a postmortem examination or an obduction, is a medical procedure that consists of a thorough examination of a human corpse to determine the cause and manner of a person's death and to evaluate any disease or injury that may be present. It is usually performed by a specialized medical doctor called a pathologist.
Rembrandt turns an autopsy into a masterpiece: The Anatomy Lesson of Dr. Nicolaes Tulp
Autopsies are performed for either legal or medical purposes. A forensic autopsy is carried out when the cause of death may be a criminal matter, while a clinical or academic autopsy is performed to find the medical cause of death and is used in cases of unknown or uncertain death, or for research purposes. Autopsies can be further classified into cases where external examination suffices, and those where the body is dissected and an internal examination is conducted. Permission from next of kin may be required for an internal autopsy in some cases. Once an internal autopsy is complete the body is generally reconstituted by sewing it back together. Autopsy is important in a medical environment and may shed light on mistakes and help improve practices.
A "necropsy" is an older term for a postmortem examination, unregulated, and not always a medical procedure. In modern times the term is more often used in the postmortem examination of the corpses of animals.
Life extension
Main article: Life extension
Life extension refers to an increase in maximum or average lifespan, especially in humans, by slowing down or reversing the processes of aging. Average lifespan is determined by vulnerability to accidents and age or lifestyle-related afflictions such as cancer, or cardiovascular disease. Extension of average lifespan can be achieved by good diet, exercise and avoidance of hazards such as smoking. Maximum lifespan is determined by the rate of aging for a species inherent in its genes. Currently, the only widely recognized method of extending maximum lifespan is calorie restriction. Theoretically, extension of maximum lifespan can be achieved by reducing the rate of aging damage, by periodic replacement of damaged tissues, or by molecular repair or rejuvenation of deteriorated cells and tissues.
Researchers of life extension are a subclass of biogerontologists known as "biomedical gerontologists". They try to understand the nature of aging and they develop treatments to reverse aging processes or to at least slow them down, for the improvement of health and the maintenance of youthful vigor at every stage of life. Those who take advantage of life extension findings and seek to apply them upon themselves are called "life extensionists" or "longevists". The primary life extension strategy currently is to apply available anti-aging methods in the hope of living long enough to benefit from a complete cure to aging once it is developed, which given the rapidly advancing state of biogenetic and general medical technology, could conceivably occur within the lifetimes of people living today.
Location
Before about 1930, most people died in their own homes, surrounded by family, and comforted by clergy, neighbors, and doctors making house calls.[26] By the mid-20th century, half of all Americans died in a hospital.[27] By the start of the 21st century, only about 20 to 25% of people in developed countries died in the community.[27][28][29] The shift away from dying at home, towards dying in a professionalized medical environment, has been termed the "Invisible Death".[27]
Society and culture
Main article: Death and culture
The regent duke Charles (later king Charles IX of Sweden) insulting the corpse of Klaus Fleming. Albert Edelfelt, 1878.
Dead bodies can be mummified either naturally, as this one from Guanajuato, or by intention, as those in ancient Egypt.
Death is the center of many traditions and organizations; customs relating to death are a feature of every culture around the world. Much of this revolves around the care of the dead, as well as the afterlife and the disposal of bodies upon the onset of death. The disposal of human corpses does, in general, begin with the last offices before significant time has passed, and ritualistic ceremonies often occur, most commonly interment or cremation. This is not a unified practice, however, as in Tibet for instance the body is given a sky burial and left on a mountain top. Proper preparation for death and techniques and ceremonies for producing the ability to transfer one's spiritual attainments into another body (reincarnation) are subjects of detailed study in Tibet.[30] Mummification or embalming is also prevalent in some cultures, to retard the rate of decay.
Legal aspects of death are also part of many cultures, particularly the settlement of the deceased estate and the issues of inheritance and in some countries, inheritance taxation.
Gravestones in Kyoto, Japan
Capital punishment is also a culturally divisive aspect of death. In most jurisdictions where capital punishment is carried out today, the death penalty is reserved for premeditated murder, espionage, treason, or as part of military justice. In some countries, sexual crimes, such as adultery and sodomy, carry the death penalty, as do religious crimes such as apostasy, the formal renunciation of one's religion. In many retentionist countries, drug trafficking is also a capital offense. In China human trafficking and serious cases of corruption are also punished by the death penalty. In militaries around the world courts-martial have imposed death sentences for offenses such as cowardice, desertion, insubordination, and mutiny.[31]
Death in warfare and in suicide attack also have cultural links, and the ideas of dulce et decorum est pro patria mori, mutiny punishable by death, grieving relatives of dead soldiers and death notification are embedded in many cultures. Death for a cause by way of suicide attack or martyrdom has had significant cultural impact in the Western world, most recently with the perceived increase in terrorism following the September 11 attacks, but also further back in time with suicide bombings, kamikaze missions in World War II and suicide missions in a host of other conflicts in history.
Suicide in general, and particularly euthanasia, are also points of cultural debate. Both acts are understood very differently in different cultures. In Japan, for example, ending a life with honor by seppuku was considered a desirable death, whereas according to traditional Christian and Islamic cultures, suicide is viewed as a sin. Death is personified in many cultures, with such symbolic representations as the Grim Reaper, Azrael and Father Time.
In biology
After death the remains of an organism become part of the biogeochemical cycle. Animals may be consumed by a predator or a scavenger. Organic material may then be further decomposed by detritivores, organisms which recycle detritus, returning it to the environment for reuse in the food chain. Examples of detritivores include earthworms, woodlice and dung beetles.
Microorganisms also play a vital role, raising the temperature of the decomposing matter as they break it down into yet simpler molecules. Not all materials need to be decomposed fully, however. Coal, a fossil fuel formed over vast tracts of time in swamp ecosystems, is one example.
Natural selection
Main articles: competition (biology), natural selection, and extinction
Contemporary evolutionary theory sees death as an important part of the process of natural selection. Organisms less adapted to their environment are considered more likely to die having produced fewer offspring, thereby reducing their contribution to the gene pool. Their genes are thus eventually bred out of a population, leading at worst to extinction and, more positively, making possible the process referred to as speciation. Frequency of reproduction plays an equally important role in determining species survival: an organism that dies young but leaves numerous offspring displays, according to Darwinian criteria, much greater fitness than a long-lived organism leaving only one.
Extinction
Main article: Extinction
A dodo, the bird that became a byword in English for species extinction[32]
Extinction is the cessation of existence of a species or group of taxa, reducing biodiversity. The moment of extinction is generally considered to be the death of the last individual of that species (although the capacity to breed and recover may have been lost before this point). Because a species' potential range may be very large, determining this moment is difficult, and is usually done retrospectively. This difficulty leads to phenomena such as Lazarus taxa, where a species presumed extinct abruptly "reappears" (typically in the fossil record) after a period of apparent absence. New species arise through the process of speciation, an aspect of evolution. New varieties of organisms arise and thrive when they are able to find and exploit an ecological niche – and species become extinct when they are no longer able to survive in changing conditions or against superior competition.
Evolution of aging
Main article: Evolution of ageing
Inquiry into the evolution of aging aims to explain why so many living things and the vast majority of animals weaken and die with age (a notable exception being hydra, which may be biologically immortal). The evolutionary origin of senescence remains one of the fundamental puzzles of biology. Gerontology specializes in the science of human aging processes.
ARTS
Art is a global activity which encompasses a host of disciplines, as evidenced by the range of words and phrases which have been invented to describe its various forms. Examples of such phraseology include: Fine Arts, Liberal Arts, Visual Arts, Decorative Arts, Applied Arts, Design, Crafts, Performing Arts, and so on.
The term art commonly refers to the "Visual Arts", as an abbreviation of creative art or fine art. For example, the history of art is described as "the history of the visual arts of painting, sculpture and architecture. It is the history of one of the fine arts, others of which are the performing arts and literature. It is also one of the humanities. The term sometimes encompasses theory of the visual arts, including aesthetics." In the article for fine art, we read:
Confusion often occurs when people mistakenly refer to the Fine Arts but mean the Performing Arts (Music, Dance, Drama, etc.). However, there is some disagreement here: e.g., at York University (Toronto, Canada) Fine Arts is a faculty that includes the [visual arts], design and the "Performing Arts".[5] Furthermore, creative writing is frequently considered a fine art as well.
To illustrate the previous statements, the College of Fine Arts at Stephen F. Austin State University (Nacogdoches, TX) consists of the Schools of "Art, Music and Theatre",[6] while one of the Bachelor of Fine Arts degrees at the University of British Columbia is attached to the Creative Writing Program.[7]
More work would be required to standardize the use of the terms "art" and "fine art", but for the purpose of this article the definition of "the arts" is not problematic, because it includes all the arts. One artist has even suggested that "[it] would really simplify matters if we could all just stick with visual, auditory, performance or literary – when we speak of The Arts – and eliminate “Fine” altogether".[8]
History
For all intents and purposes, the history of the arts begins with the history of art, as dealt with elsewhere. The histories of the performing arts and of literature are likewise described in other articles (see: Outline of performing arts; History of literature). Some examples of creative art through the ages can be summarized here, as excerpted from the history of art.
Ancient Greek art saw the veneration of the human form and the development of equivalent skills to show musculature, poise, beauty and anatomically correct proportions. Ancient Roman art depicted gods as idealized humans, shown with characteristic distinguishing features (e.g. Zeus' thunderbolt).
In Byzantine and Gothic art of the Middle Ages, the dominant church insisted on the expression of biblical rather than material truths.
Eastern art has generally worked in a style akin to Western medieval art, namely a concentration on surface patterning and local colour (meaning the plain colour of an object, such as basic red for a red robe, rather than the modulations of that colour brought about by light, shade and reflection). A characteristic of this style is that the local colour is often defined by an outline (a contemporary equivalent is the cartoon). This is evident in, for example, the art of India, Tibet and Japan.
An artist's palette
Religious Islamic art forbids iconography, and expresses religious ideas through geometry instead.
The physical and rational certainties depicted by the 19th-century Enlightenment were shattered not only by new discoveries of relativity by Einstein [1] and of unseen psychology by Freud,[2] but also by unprecedented technological development. Paradoxically, the expressions of these new technologies were greatly influenced by the ancient tribal arts of Africa and Oceania, through the works of Paul Gauguin and the Post-Impressionists, Pablo Picasso and the Cubists, as well as the Futurists and others.
The various arts
In the Middle Ages, the Artes Liberales (liberal arts) were taught in medieval universities as part of the Trivium (grammar, rhetoric, and logic) and the Quadrivium (arithmetic, geometry, music, and astronomy), alongside the Artes Mechanicae (mechanical arts) such as metalworking, farming, cooking, business and the making of clothes or cloth. The modern distinctions between "artistic" and non-artistic skills did not develop until the Renaissance.
In modern academia, the arts are usually grouped with, or treated as a subset of, the Humanities. Some subjects in the Humanities are history, linguistics, literature, and philosophy. Newspapers typically include a section on the arts.
Traditionally, the arts have been classified as seven, although the list has since been expanded to nine: Architecture, Sculpture, Painting, Music, Poetry, Dance, and Theater/Cinema, with the modern non-traditional additions of Photography[9] and Comics[10].
Visual arts
Main article: Fine art
Further information: Plastic arts, Work of art
Drawing
Main article: Drawing
Drawing is a means of making an image, using any of a wide variety of tools and techniques. It generally involves making marks on a surface by applying pressure from a tool, or moving a tool across a surface. Common tools are graphite pencils, pen and ink, inked brushes, wax color pencils, crayons, charcoals, pastels, and markers. Digital tools which simulate the effects of these are also used. The main techniques used in drawing are: line drawing, hatching, crosshatching, random hatching, scribbling, stippling, and blending. An artist who excels in drawing is referred to as a draftswoman or draughtsman.
Gastronomy
Main article: Gastronomy
Gastronomy is the study of the relationship between culture and food. It is often thought erroneously that the term gastronomy refers exclusively to the art of cooking (see Culinary art), but this is only a small part of this discipline; it cannot always be said that a cook is also a gourmet. Gastronomy studies various cultural components with food as its central axis. Thus it is related to the Fine Arts and Social Sciences, and even to the Natural Sciences in terms of the digestive system of the human body.
Architecture
Main article: Architecture
The Parthenon on top of the Acropolis, Athens, Greece
Architecture (from Latin, architectura and ultimately from Greek, αρχιτεκτων, "a master builder", from αρχι- "chief, leader" and τεκτων, "builder, carpenter")[3] is the art and science of designing buildings and structures.
A wider definition would include within its scope the design of the total built environment, from the macrolevel of town planning, urban design, and landscape architecture to the microlevel of creating furniture. Architectural design usually must address both feasibility and cost for the builder, as well as function and aesthetics for the user.
Table of architecture, Cyclopaedia, 1728
In modern usage, architecture is the art and discipline of creating an actual, or inferring an implied or apparent plan of any complex object or system. The term can be used to connote the implied architecture of abstract things such as music or mathematics, the apparent architecture of natural things, such as geological formations or the structure of biological cells, or explicitly planned architectures of human-made things such as software, computers, enterprises, and databases, in addition to buildings. In every usage, an architecture may be seen as a subjective mapping from a human perspective (that of the user in the case of abstract or physical artifacts) to the elements or components of some kind of structure or system, which preserves the relationships among the elements or components.
Planned architecture manipulates space, volume, texture, light, shadow, or abstract elements in order to achieve pleasing aesthetics. This distinguishes it from applied science or engineering, which usually concentrate more on the functional and feasibility aspects of the design of constructions or structures.
In the field of building architecture, the skills demanded of an architect range from the more complex, such as for a hospital or a stadium, to the apparently simpler, such as planning residential houses. Many architectural works may be seen also as cultural and political symbols, and/or works of art. The role of the architect, though changing, has been central to the successful (and sometimes less than successful) design and implementation of pleasingly built environments in which people live.
Painting
Main article: Painting
The Mona Lisa is one of the most recognizable artistic paintings in the Western world.
Painting taken literally is the practice of applying pigment suspended in a vehicle (or medium) and a binding agent (a glue) to a surface (support) such as paper, canvas, wood panel or a wall. However, when used in an artistic sense it means the use of this activity in combination with drawing, composition and other aesthetic considerations in order to manifest the expressive and conceptual intention of the practitioner. Painting is also used to express spiritual motifs and ideas; sites of this kind of painting range from artwork depicting mythological figures on pottery to The Sistine Chapel to the human body itself.
Colour is the essence of painting as sound is of music. Colour is highly subjective, but has observable psychological effects, although these can differ from one culture to the next. Black is associated with mourning in the West, but elsewhere white may be. Some painters, theoreticians, writers and scientists, including Goethe, Kandinsky and Newton, have written their own colour theories. Moreover, the use of language is only a generalization for a colour equivalent. The word "red", for example, can cover a wide range of variations on the pure red of the spectrum. There is not a formalized register of different colours in the way that there is agreement on different notes in music, such as C or C#, although the Pantone system is widely used in the printing and design industry for this purpose.
Modern artists have extended the practice of painting considerably to include, for example, collage. This began with Cubism and is not painting in the strict sense. Some modern painters incorporate different materials such as sand, cement, straw or wood for their texture. Examples of this are the works of Jean Dubuffet and Anselm Kiefer.
Modern and contemporary art has moved away from the historic value of craft in favour of concept; this has led some to say that painting, as a serious art form, is dead, although this has not deterred the majority of artists from continuing to practise it, either as the whole or as part of their work.
Conceptual art
Main article: Conceptual art
Conceptual art is art in which the concept(s) or idea(s) involved in the work take precedence over traditional aesthetic and material concerns. The inception of the term in the 1960s referred to a strict and focused practice of idea-based art that often defied traditional visual criteria associated with the visual arts in its presentation as text. However, through its association with the Young British Artists and the Turner Prize during the 1990s, its popular usage, particularly in the UK, developed as a synonym for all contemporary art that does not practise the traditional skills of painting and sculpture.[11]
Video games
Main article: Video game
A debate exists in the fine arts and video game cultures over whether video games can be counted as an art form.[12] Some cite games such as Shadow of the Colossus and Myst as prime examples of video games as an art form.[13] Others, such as game designer Hideo Kojima, profess that video games are a type of service, not an art form.[14]
In May 2011, the National Endowment for the Arts included video games in its redefinition of what is considered a work of art.[15]
Literary arts
Main articles: Language and Literature
Shakespeare wrote some of the best known works in English literature.
Literature is literally "acquaintance with letters", as in the first sense given in the Oxford English Dictionary (from the Latin littera meaning "an individual written character (letter)"). The term has generally come to identify a collection of writings, which in Western culture are mainly prose (both fiction and non-fiction), drama and poetry. In much, if not all, of the world, texts can be oral as well, and include such genres as epic, legend, myth, ballad, other forms of oral poetry, and folktale.
Performing arts
Main article: Performing arts
The performing arts differ from the plastic arts insofar as the former use the artist's own body, face and presence as a medium, while the latter use materials such as clay, metal or paint, which can be molded or transformed to create some art object.
Performing arts include acrobatics, busking, comedy, dance, magic, music, opera, operetta, film, juggling, martial arts, marching arts such as brass bands and theatre.
Artists who participate in these arts in front of an audience are called performers, including actors, comedians, dancers, musicians, and singers. Performing arts are also supported by workers in related fields, such as songwriting and stagecraft.
Performers often adapt their appearance with costumes and stage makeup.
There is also a specialized form of fine art in which the artists perform their work live to an audience. This is called Performance art. Dance was often referred to as a plastic art during the Modern dance era.
Music
Main article: Music
A musical score by Mozart.
Music is an art form whose medium is sound. Common elements of music are pitch (which governs melody and harmony), rhythm (and its associated concepts tempo, meter, and articulation), dynamics, and the sonic qualities of timbre and texture. The creation, performance, significance, and even the definition of music vary according to culture and social context. Music ranges from strictly organized compositions (and their recreation in performance), through improvisational music to aleatoric forms. Music can be divided into genres and subgenres, although the dividing lines and relationships between music genres are often subtle, sometimes open to individual interpretation, and occasionally controversial. Within "the arts", music may be classified as a performing art, a fine art, and auditory art.
Theatre
Main article: Theatre
Theatre or theater (Greek "theatron", θέατρον) is the branch of the performing arts concerned with acting out stories in front of an audience using combinations of speech, gesture, music, dance, sound and spectacle — indeed any one or more elements of the other performing arts. In addition to the standard narrative dialogue style, theatre takes such forms as opera, ballet, mime, kabuki, classical Indian dance, Chinese opera and mummers' plays.
Dance
A Ballroom dance exhibition
Main article: Dance
Dance (from Old French dancier, perhaps from Frankish) generally refers to human movement either used as a form of expression or presented in a social, spiritual or performance setting.
Dance is also used to describe methods of non-verbal communication (see body language) between humans or animals (bee dance, mating dance), motion in inanimate objects (the leaves danced in the wind), and certain musical forms or genres.
Choreography is the art of making dances, and the person who does this is called a choreographer. People also dance simply to relieve stress.
Definitions of what constitutes dance are dependent on social, cultural, aesthetic, artistic and moral constraints and range from functional movement (such as folk dance) to codified, virtuoso techniques such as ballet. In sport, gymnastics, figure skating and synchronized swimming are dance disciplines, while martial arts "kata" are often compared to dances.
Arts criticism
Architecture criticism
Visual art criticism
Dance criticism
Film criticism
Literary criticism
Music journalism
Television criticism
Theatre criticism
See also
Culinary art
Fine art
Martial arts
Performing arts
Art in odd places
Notes
^ For example here is the Art (singular) History department of Chicago which explicitly refers to "visual arts" on its welcome page.
^ For example here is the UNC School of the Arts (plural) which offers dance, design, drama and so on.
^ http://www.thefreedictionary.com/arts Entry on The Free Dictionary provided by Collins English Dictionary
^ http://www.visual-arts-cork.com/art-definition.htm#definition A Working Definition of Art (2009)
^ Faculty of Fine Arts, York University
^ College of Fine Arts (Stephen F. Austin State University)
^ The Creative Writing Program at UBC
^ An About.com article by artist and educator, Shelley Esaak, answering the question: What Is Visual Art? in relation to the other arts.
^ Keppler, Victor (in English). A life of color photography: The eighth art. W. Morrow & Co. ASIN B00085HDEI.
^ Dierick, Charles (in Dutch). Het Belgisch Centrum van het Beeldverhaal. Brussels: Dexia Bank / La Renaissance du Livre. p. 11. ISBN 2-8046-0449-7.
^ Turner prize history: Conceptual art Tate gallery tate.org.uk. Accessed August 8, 2006
^ "From the Archives: Going Through Game Informer's Past". Game Informer (200): 83. December 2009.
^ Ebert, Roger. "Okay, kids, play on my lawn". Chicago Sun-Times.
^ "Kojima Says "Games Are Not Art"". Retrieved 2011-01-06. Kotaku (2006)
^ "US Government Declares 'Video Games Are Art'". International Business Times. 13 May 2011. Retrieved 24 August 2011.
References
Jon Turney, "Does time fly?" (on Peter Galison's Empires of Time, a historical survey of Einstein and Poincaré), The Guardian, Saturday 6 September 2003
Contradictions of the Enlightenment: Darwin, Freud, Einstein
External links
ExtremeEngineering
Wednesday, August 31, 2011
BIGBANGTHEORY
Tuesday, August 30, 2011
civilengineering
Until modern times there was no clear distinction between civil engineering and architecture, and the terms engineer and architect were mainly geographical variations referring to the same person, often used interchangeably.[7] The construction of the Pyramids in Egypt (circa 2700–2500 BC) might be considered among the first instances of large structure construction. Other ancient historic civil engineering constructions include the Qanat water management system (the oldest of which is more than 3,000 years old and longer than 71 km[8]), the Parthenon by Iktinos in Ancient Greece (447–438 BC), the Appian Way by Roman engineers (c. 312 BC), the Great Wall of China by General Meng T'ien under orders from Ch'in Emperor Shih Huang Ti (c. 220 BC)[6] and the stupas constructed in ancient Sri Lanka like the Jetavanaramaya, along with the extensive irrigation works in Anuradhapura. The Romans developed civil structures throughout their empire, including especially aqueducts, insulae, harbours, bridges, dams and roads.
The Archimedes screw was operated by hand and could raise water efficiently.
In the 18th century, the term civil engineering was coined to incorporate all things civilian as opposed to military engineering.[5] The first self-proclaimed civil engineer was John Smeaton who constructed the Eddystone Lighthouse.[4][6] In 1771 Smeaton and some of his colleagues formed the Smeatonian Society of Civil Engineers, a group of leaders of the profession who met informally over dinner. Though there was evidence of some technical meetings, it was little more than a social society.
In 1818 the Institution of Civil Engineers was founded in London, and in 1820 the eminent engineer Thomas Telford became its first president. The institution received a Royal Charter in 1828, formally recognising civil engineering as a profession. Its charter defined civil engineering as:
the art of directing the great sources of power in nature for the use and convenience of man, as the means of production and of traffic in states, both for external and internal trade, as applied in the construction of roads, bridges, aqueducts, canals, river navigation and docks for internal intercourse and exchange, and in the construction of ports, harbours, moles, breakwaters and lighthouses, and in the art of navigation by artificial power for the purposes of commerce, and in the construction and application of machinery, and in the drainage of cities and towns.[9]
The first private college to teach Civil Engineering in the United States was Norwich University founded in 1819 by Captain Alden Partridge.[10] The first degree in Civil Engineering in the United States was awarded by Rensselaer Polytechnic Institute in 1835.[11] The first such degree to be awarded to a woman was granted by Cornell University to Nora Stanton Blatch in 1905.[12]
ElectronicRobert
A robot is a mechanical intelligent agent which can perform tasks on its own or with guidance. In practice a robot is usually an electro-mechanical machine guided by computer and electronic programming. Robots can be autonomous or semi-autonomous and come in two basic types: those used for research into human-like systems, such as ASIMO and TOPIO, or into more defined and specific roles, such as nano robots and swarm robots; and helper robots, which are used to make or move things or to perform menial or dangerous tasks, such as industrial robots or mobile and servicing robots. Another common characteristic is that, by its appearance or movements, a robot often conveys a sense that it has intent or agency of its own.
When societies first began developing, nearly all production and effort was the result of human labour, aided by semi- and fully domesticated animals. As mechanical means of performing functions were discovered, and mechanics and complex mechanisms were developed, the need for human labour was reduced. Machinery was initially used for repetitive functions, such as lifting water and grinding grain. With technological advances, more complex machines were slowly developed, such as those invented by Hero of Alexandria (in Egypt) in the 1st century AD and, in the first half of the second millennium AD, the automata of Al-Jazari in the 12th century AD (in medieval Iraq). They were not widely adopted because human labour, particularly slave labour, was still inexpensive compared to the capital-intensive machines. Men from Leonardo da Vinci in 1495 through to Jacques de Vaucanson in 1739, as well as those rediscovering Greek engineering methods, made plans for and built automata and robots, leading to books of designs such as the Japanese Karakuri zui (Illustrated Machinery) of 1796. As mechanical techniques developed through the Industrial Age, we find more practical applications, such as Nikola Tesla's radio-controlled torpedo of 1898 and the Westinghouse Electric Corporation's Televox of 1926. From here we also find more android-like development, as designers tried to mimic human features, including designs such as those of biologist Makoto Nishimura in 1929 and his creation Gakutensoku, which cried and changed its facial expressions, and the cruder Elektro from Westinghouse in 1938.
Electronics then became the driving force of development instead of mechanics, with the advent of the first electronic autonomous robots created by William Grey Walter in Bristol, England, in 1948. The first digital and programmable robot was invented by George Devol in 1954 and was ultimately called the Unimate. Devol sold the first Unimate to General Motors in 1960, and it was used to lift pieces of hot metal from die casting machines in a plant in Trenton, New Jersey. Since then, robots have come to integrate these technologies more fully, producing machines such as ASIMO, which can walk and move like a human. Robots now perform many of the repetitive and dangerous tasks which humans prefer not to do, or are unable to do because of size limitations, or which take place in extreme environments such as outer space or the bottom of the sea where humans could not survive.
Man has developed an awareness of the problems associated with autonomous robots and how they may act in society. Fear of robot behaviour, exemplified by Shelley's Frankenstein and the EATR, drives current practice in establishing what autonomy a robot should and should not be capable of. Thinking has developed through discussion of robot control and artificial intelligence (AI) and how its application should benefit society, such as those based around Asimov's three laws. Practicality still drives development forwards, and robots are used in an increasingly wide variety of tasks such as vacuuming floors, mowing lawns, cleaning drains, investigating other planets, building cars, in entertainment and in warfare.
Friday, April 8, 2011
Discovery of Electrons
J.J. Thomson became the third Cavendish Professor of Experimental Physics in 1884. One of the phenomena he studied was the conduction of electricity through gases.
One subject which interested Thomson was cathode rays. These rays are emitted at the cathode, or negative terminal, in a discharge tube. In 1879 Crookes had proposed that the cathode rays were 'radiant matter', negatively charged particles that were repelled from the negatively charged cathode and attracted to the positively charged anode.
The nature of the cathode rays was controversial. Although Thomson thought the rays must be particles, many Europeans thought they were an 'etherial disturbance', like light. In Germany Hertz had observed the rays passing through thin sheets of gold. It seemed impossible that particles could pass through solid matter.
Hertz had also found (wrongly) that the rays were not deflected by electric fields. In 1897 Thomson repeated Hertz's experiment and balanced the cathode rays between the electric and magnetic forces.
The force (F) on a charged object in an electric field depends on the strength of the electric field (E), multiplied by the charge (q) on the object.
F = Eq
The force (F) on a charged object in a magnetic field depends on the strength of the magnetic field (B), multiplied by both the charge (q) and velocity (v) of the object.
F = Bqv
Since the forces were balanced:
Eq = Bqv
v = E/B
The velocity was equal to the electric field strength divided by the magnetic field strength. Thomson could measure these field strengths and use them to calculate the velocity of the rays.
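A quick numerical sketch of the balance described above: once the electric force Eq equals the magnetic force Bqv, the charge q cancels and v = E/B. The field values below are assumed placeholders for illustration, not Thomson's actual measurements.

```python
# Sketch of Thomson's crossed-field balance: Eq = Bqv  =>  v = E/B.
# Field strengths below are illustrative, not historical measurements.

E_FIELD = 2.0e4   # electric field strength, volts per metre (assumed)
B_FIELD = 5.0e-4  # magnetic flux density, tesla (assumed)

def cathode_ray_speed(e_field: float, b_field: float) -> float:
    """Speed at which the electric and magnetic forces on the charge balance."""
    return e_field / b_field

v = cathode_ray_speed(E_FIELD, B_FIELD)
print(f"Balanced cathode-ray speed: {v:.2e} m/s")  # 4.00e+07 m/s with these values
```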
Thursday, April 7, 2011
Technical Education
Lincoln College of New England graduates have an excellent reputation among employers and are poised for success. On-campus housing is available at our three distinct small campuses in Hartford, Southington and Suffield, CT.
Lincoln College of New England offers more than 30 undergraduate degree programs within the fields of Health Sciences, Business, Communications, Hospitality, and more. Bachelor's degrees, associate degrees, and certificates are available depending on the program area.
Nepal rich in hydropower
There are three major river systems in Nepal, namely the Kosi, Gandaki and Karnali rivers, which span the country from east to west while flowing from north to south. The surroundings of most rivers are in their natural settings. Nepali rivers are a paradise for river rafters who just can't have enough of angry, raging water. Need we mention Himalayan water? It's all here in this beautiful country. No matter how many rivers you have rafted here, there is always a river waiting to be explored. Many of Nepal's rivers, such as the Karnali, Seti and Gandaki, are fed by the Himalayas. These rivers rush down from near the 8,848 m altitude of the high Himalaya to about 60 m above sea level. The extreme elevation drop makes these rivers fly! They carry enough water to generate more than 90,000 MW of electricity. Currently Nepal produces less than 2% of this capacity. So why hasn't anything been done to get closer to the other 98% of this open business?
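To make the figures above concrete, here is a back-of-the-envelope calculation using only the numbers quoted in this post (roughly 90,000 MW of potential and under 2% of it currently produced); treat both inputs as rough estimates rather than official statistics.

```python
# Back-of-the-envelope check of the figures quoted above.
# Both inputs come from the text; treat them as rough estimates.

POTENTIAL_MW = 90_000   # estimated hydropower potential, megawatts
CURRENT_SHARE = 0.02    # "less than 2%" of that potential is currently produced

current_output_mw = POTENTIAL_MW * CURRENT_SHARE
untapped_mw = POTENTIAL_MW - current_output_mw

print(f"Implied current output ceiling: about {current_output_mw:,.0f} MW")
print(f"Untapped potential:             about {untapped_mw:,.0f} MW "
      f"({(1 - CURRENT_SHARE):.0%} of the total)")
```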
Many small hydropower plants are currently being set up. Lack of infrastructure such as roads, shifting government policy, and war and conflict in the region have slowed down many projects.
In Nepal, there are more plans than actions. There are plans to alleviate poverty, and such plans go through numbers like these: Plan 1 to Plan 20. There are also plans to set up hydropower projects to make Nepal self-sufficient in electricity and to earn foreign revenue by selling it. Hydropower plans have similar numbers, like Plan 1, Plan 2 and so on. They are as boring as the talks of political leaders. Everybody loves pointing their fingers at the other government, and every government operates for about a year before it is replaced by another. When the new government comes to office, they argue that the plan numbers did not go far enough, so they add Plan 21 through Plan 9999 before saying goodbye! In the last 10 years alone, Nepal has had more than 10 different governments, about one new government per year.
In case you were interested, there are hydropower plans up to the year 2030, by which time they believe Nepal will produce enough electricity for the entire country and start making some money by selling it!
Many small to medium sized, some privately owned, hydropower plants are being set up in many parts of the country, proving to foreign investors that Nepal's rivers are good for business. Read about Nepal's war and how Nepal is unfolding (though some argue whether it really is). Also visit blogs by Nepalese writers who have good coverage of what the Nepali government really is. See Web Directory > Nepali Blogs
Also check out this PDF file with the Nepal Power Development Map, which shows many small to large-scale hydropower projects, some active and some still dormant.
Einstein photoelectric effect
In the photoelectric effect, electrons are emitted from matter (metals and non-metallic solids, liquids or gases) as a consequence of their absorption of energy from electromagnetic radiation of very short wavelength, such as visible or ultraviolet light. Electrons emitted in this manner may be referred to as "photoelectrons". First observed by Heinrich Hertz in 1887, the phenomenon is also known as the "Hertz effect", although the latter term has fallen out of general use. Hertz observed, and then showed, that electrodes illuminated with ultraviolet light create electric sparks more easily.
The photoelectric effect requires photons with energies from a few electronvolts to over 1 MeV in high atomic number elements. Study of the photoelectric effect led to important steps in understanding the quantum nature of light and electrons and influenced the formation of the concept of wave–particle duality. Other phenomena where light affects the movement of electric charges include the photoconductive effect (also known as photoconductivity or photoresistivity), the photovoltaic effect, and the photoelectrochemical effect.
The photons of a light beam have a characteristic energy determined by the frequency of the light. In the photoemission process, if an electron within some material absorbs the energy of one photon and thus has more energy than the work function (the electron binding energy) of the material, it is ejected. If the photon energy is too low, the electron is unable to escape the material. Increasing the intensity of the light beam increases the number of photons in the light beam, and thus increases the number of electrons excited, but does not increase the energy that each electron possesses. The energy of the emitted electrons does not depend on the intensity of the incoming light, but only on the energy or frequency of the individual photons. It is an interaction between the incident photon and the outermost electron.
Electrons can absorb energy from photons when irradiated, but they usually follow an "all or nothing" principle. All of the energy from one photon must be absorbed and used to liberate one electron from atomic binding, or else the energy is re-emitted. If the photon energy is absorbed, some of the energy liberates the electron from the atom, and the rest contributes to the electron's kinetic energy as a free particle.[citation needed]
Experimental results of the photoelectric emission
For a given metal and frequency of incident radiation, the rate at which photoelectrons are ejected is directly proportional to the intensity of the incident light.
For a given metal, there exists a certain minimum frequency of incident radiation below which no photoelectrons can be emitted. This frequency is called the threshold frequency.
For a given metal of particular work function, an increase in the intensity of the incident beam increases the magnitude of the photoelectric current, though the stopping voltage remains the same.
For a given metal of particular work function, an increase in the frequency of the incident beam increases the maximum kinetic energy with which the photoelectrons are emitted, but the photoelectric current remains the same, though the stopping voltage increases.
Above the threshold frequency, the maximum kinetic energy of the emitted photoelectron depends on the frequency of the incident light, but is independent of the intensity of the incident light so long as the latter is not too high [5]
The time lag between the incidence of radiation and the emission of a photoelectron is very small, less than 10−9 second.
The direction of distribution of emitted electrons peaks in the direction of polarization (the direction of the electric field) of the incident light, if it is linearly polarized.[citation needed]
Mathematical description
The maximum kinetic energy Kmax of an ejected electron is given by
Kmax = hf − φ
where h is the Planck constant, f is the frequency of the incident photon, and φ is the work function (sometimes denoted W), which is the minimum energy required to remove a delocalised electron from the surface of any given metal. The work function, in turn, can be written as
φ = hf0
where f0 is called the threshold frequency for the metal. The maximum kinetic energy of an ejected electron is then
Kmax = hf − hf0 = h(f − f0)
Because the kinetic energy of the electron must be positive, it follows that the frequency f of the incident photon must be greater than f0 in order for the photoelectric effect to occur.
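A small numerical sketch of the relation Kmax = h(f − f0). The work function below is an assumed, illustrative value for a generic metal, and the light frequency is likewise chosen only for demonstration.

```python
# Sketch of the photoelectric relation Kmax = h*f - phi = h*(f - f0).
# The work function below is an assumed illustrative value, not a lookup.

H = 6.626e-34          # Planck constant, J*s
E_CHARGE = 1.602e-19   # elementary charge, C (for converting J to eV)

PHI_EV = 2.3           # assumed work function of the metal, eV (illustrative)
phi_joules = PHI_EV * E_CHARGE
f0 = phi_joules / H    # threshold frequency for this metal

def max_kinetic_energy(frequency_hz: float) -> float:
    """Maximum kinetic energy (J) of an ejected electron; zero below threshold."""
    return max(0.0, H * frequency_hz - phi_joules)

f = 7.5e14  # frequency of the incident light, Hz (violet light, illustrative)
k_max = max_kinetic_energy(f)
print(f"Threshold frequency f0: {f0:.2e} Hz")
print(f"Kmax at f = {f:.2e} Hz: {k_max:.2e} J ({k_max / E_CHARGE:.2f} eV)")
```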
Stopping potential
The relation between current through an illuminated photoelectric system and applied voltage illustrates the nature of the photoelectric effect. For discussion, a plate P is illuminated by a light source, and any emitted electrons are collected at another plate electrode Q. The potential between P and Q can be varied and the current flowing in the external circuit between P and Q is measured.
If the frequency and the intensity of the incident radiation are kept fixed, it is found that the photoelectric current increases gradually with the increase in positive potential until all the photoelectrons emitted are collected. The photoelectric current attains saturation value and it does not increase further for any increase in the positive potential. The saturation current depends on the intensity of illumination, but not its wavelength.
If we apply a negative potential to plate Q with respect to plate P and increase it gradually, the photoelectric current decreases rapidly until it becomes zero at a certain negative potential on plate Q. The minimum negative potential applied to plate Q at which the photoelectric current becomes zero is called the stopping potential or cut-off potential.[7]
i. For a given frequency of incident radiation, the stopping potential is independent of its intensity.
ii. For a given frequency of incident radiation, the stopping potential V0 is related to the maximum kinetic energy of the photoelectron that is just stopped from reaching plate Q.
If m is the mass and vmax is the maximum velocity of the emitted photoelectron, then its maximum kinetic energy is (1/2)mv²max.
If e is the charge on the electron and V0 is the stopping potential, then the work done by the retarding potential in stopping the electron is eV0.
Therefore, we have
(1/2)mv²max = eV0
Since the stopping potential is independent of the intensity of the incident light, this relation shows that the maximum velocity of the emitted photoelectrons is likewise independent of the intensity.
Hence, we have the equality
Kmax = eV0
The stopping voltage varies linearly with frequency of light, but depends on the type of material. For any particular material, there is a threshold frequency that must be exceeded, independent of light intensity, to observe any electron emission.
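This linear dependence can be sketched directly from V0 = Kmax/e = (h/e)(f − f0): the slope h/e (about 4.14 × 10⁻¹⁵ V·s) is the same for every material, while the intercept depends on the work function. The values below reuse the assumed 2.3 eV work function from the earlier sketches.

```python
# Sketch of V0 = Kmax/e = (h/e)*(f - f0): stopping potential rises linearly
# with frequency, with a material-independent slope h/e (~4.14e-15 V*s).
# The 2.3 eV work function is the same assumed value as before.

h = 6.626e-34
e = 1.602e-19

phi_eV = 2.3
f0 = phi_eV * e / h                   # threshold frequency for this metal

for f in (6.0e14, 7.0e14, 8.0e14):    # frequencies above threshold, Hz
    V0 = (h / e) * (f - f0)           # stopping potential, volts
    print(f"f = {f:.1e} Hz  ->  V0 = {V0:.2f} V")

# Repeating this for metals with different work functions would give parallel
# lines of the same slope h/e, which is how Millikan extracted Planck's constant.
```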
Three-step model
In the X-ray regime, the photoelectric effect in crystalline material is often decomposed into three steps:[8]
Inner photoelectric effect (see photodiode below). The hole left behind can give rise to the Auger effect, which is visible even when the electron does not leave the material. In molecular solids, phonons are excited in this step and may be visible as lines in the final electron energy. The inner photoelectric effect has to be dipole allowed. The transition rules for atoms translate via the tight-binding model onto the crystal. They are similar in geometry to plasma oscillations in that they have to be transversal.
Ballistic transport of half of the electrons to the surface. Some electrons are scattered.
Electrons escape from the material at the surface.
In the three-step model, an electron can take multiple paths through these three steps. All paths can interfere in the sense of the path integral formulation. For surface states and molecules, the three-step model still makes some sense, since even most atoms have multiple electrons which can scatter the one electron leaving.
Wednesday, April 6, 2011
Hurricane (Destruction)
Hurricane Katrina of the 2005 Atlantic hurricane season was the costliest natural disaster, as well as one of the five deadliest hurricanes, in the history of the United States.[2] Among recorded Atlantic hurricanes, it was the sixth strongest overall. At least 1,836 people died in the actual hurricane and in the subsequent floods, making it the deadliest U.S. hurricane since the 1928 Okeechobee hurricane; total property damage was estimated at $81 billion (2005 USD), nearly triple the damage wrought by Hurricane Andrew in 1992.
Hurricane Katrina formed over the Bahamas on August 23, 2005 and crossed southern Florida as a moderate Category 1 hurricane, causing some deaths and flooding there before strengthening rapidly in the Gulf of Mexico. The storm weakened before making its second landfall as a Category 3 storm on the morning of Monday, August 29 in southeast Louisiana. It caused severe destruction along the Gulf coast from central Florida to Texas, much of it due to the storm surge. The largest number of deaths occurred in New Orleans, Louisiana, which flooded as the levee system catastrophically failed, in many cases hours after the storm had moved inland. Eventually 80% of the city and large tracts of neighboring parishes became flooded, and the floodwaters lingered for weeks. However, the worst property damage occurred in coastal areas, such as all Mississippi beachfront towns, which were more than 90% flooded within hours, as boats and casino barges rammed buildings, pushing cars and houses inland, with waters reaching 6–12 miles (10–19 km) from the beach.
The hurricane protection failures in New Orleans prompted a lawsuit against the US Army Corps of Engineers (USACE), the designers and builders of the levee system as mandated in the Flood Control Act of 1965. Responsibility for the failures and flooding was laid squarely on the Army Corps in January 2008, but the federal agency could not be held financially liable due to sovereign immunity in the Flood Control Act of 1928. There was also an investigation of the responses from federal, state and local governments, resulting in the resignation of Federal Emergency Management Agency (FEMA) director Michael D. Brown, and of New Orleans Police Department (NOPD) Superintendent Eddie Compass. Conversely, the United States Coast Guard (USCG), National Hurricane Center (NHC) and National Weather Service (NWS) were widely commended for their actions, accurate forecasts and abundant lead time.
Five years later, thousands of displaced residents in Mississippi and Louisiana are still living in temporary accommodation. Reconstruction of each section of the southern portion of Louisiana has been addressed in the Army Corps of Engineers LACPR Final Technical Report which identifies areas not to be rebuilt and areas and buildings that need to be elevated.
Black hole
A black hole is a region of space from which nothing, not even light, can escape. The theory of general relativity predicts that a sufficiently compact mass will deform spacetime to form a black hole. Around a black hole there is an undetectable surface called an event horizon that marks the point of no return. It is called "black" because it absorbs all the light that hits the horizon, reflecting nothing, just like a perfect black body in thermodynamics.[1] Quantum mechanics predicts that black holes emit radiation like a black body with a finite temperature. This temperature is inversely proportional to the mass of the black hole, making it difficult to observe this radiation for black holes of stellar mass or greater.
Objects whose gravity field is too strong for light to escape were first considered in the 18th century by John Michell and Pierre-Simon Laplace. The first modern prediction of a black hole in general relativity was found by Karl Schwarzschild in 1916, although its interpretation as a black hole was not fully appreciated for another four decades. Long considered a mathematical curiosity, it was during the 1960s that theoretical work showed black holes were a generic prediction of general relativity. The discovery of neutron stars sparked interest in gravitationally collapsed compact objects as a possible astrophysical reality.
Black holes of stellar mass are expected to form when heavy stars collapse in a supernova at the end of their life cycle. After a black hole has formed it can continue to grow by absorbing mass from its surroundings. By absorbing other stars and merging with other black holes, supermassive black holes of millions of solar masses may be formed.
Despite its invisible interior, the presence of a black hole can be inferred through its interaction with other matter. Astronomers have identified numerous stellar black hole candidates in binary systems, by studying their interaction with their companion stars. There is growing consensus that supermassive black holes exist in the centers of most galaxies. In particular, there is strong evidence of a black hole of more than 4 million solar masses at the center of our Milky Way.
In 1915, Albert Einstein developed his theory of general relativity, having earlier shown that gravity does influence light's motion. Only a few months later, Karl Schwarzschild found a solution to the Einstein field equations, which describes the gravitational field of a point mass and a spherical mass. A few months after Schwarzschild, Johannes Droste, a student of Hendrik Lorentz, independently gave the same solution for the point mass and wrote more extensively about its properties. This solution had a peculiar behaviour at what is now called the Schwarzschild radius, where it became singular, meaning that some of the terms in the Einstein equations became infinite. The nature of this surface was not quite understood at the time. In 1924, Arthur Eddington showed that the singularity disappeared after a change of coordinates (see Eddington–Finkelstein coordinates), although it took until 1933 for Georges Lemaître to realize that this meant the singularity at the Schwarzschild radius was an unphysical coordinate singularity.
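For a sense of scale, the Schwarzschild radius is rs = 2GM/c², about 3 km for one solar mass. The sketch below uses this standard formula with illustrative constants to evaluate it for the Sun and for a 4-million-solar-mass object like the one inferred at the center of the Milky Way.

```python
# Schwarzschild radius r_s = 2*G*M/c^2 for a few masses (order-of-magnitude check).

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m/s
M_SUN = 1.989e30       # one solar mass, kg

def schwarzschild_radius_m(mass_kg):
    return 2.0 * G * mass_kg / C**2

print(schwarzschild_radius_m(M_SUN))           # ~2.95e3 m, about 3 km
print(schwarzschild_radius_m(4.0e6 * M_SUN))   # ~1.2e10 m, roughly 0.08 AU
```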
In 1931, Subrahmanyan Chandrasekhar calculated, using general relativity, that a non-rotating body of electron-degenerate matter above a certain limiting mass (now called the Chandrasekhar limit, at 1.4 solar masses) must have an infinite density. In other words, the object must have a radius of zero. His arguments were opposed by many of his contemporaries, such as Eddington and Lev Landau, who argued that some yet unknown mechanism would stop the collapse. They were partly correct: a white dwarf slightly more massive than the Chandrasekhar limit will collapse into a neutron star, which is itself stable because of the Pauli exclusion principle. But in 1939, Robert Oppenheimer and others predicted that neutron stars above approximately three solar masses (the Tolman–Oppenheimer–Volkoff limit) would collapse into black holes for the reasons presented by Chandrasekhar, and concluded that no law of physics was likely to intervene and stop at least some stars from collapsing to black holes.
Oppenheimer and his co-authors interpreted the singularity at the boundary of the Schwarzschild radius as indicating that this was the boundary of a bubble in which time stopped. This is a valid point of view for external observers, but not for infalling observers. Because of this property, the collapsed stars were called "frozen stars,"[13] because an outside observer would see the surface of the star frozen in time at the instant where its collapse takes it inside the Schwarzschild radius.
Golden age
See also: Golden age of general relativity
In 1958, David Finkelstein identified the Schwarzschild surface as an event horizon, "a perfect unidirectional membrane: causal influences can cross it in only one direction." This did not strictly contradict Oppenheimer's results, but extended them to include the point of view of infalling observers. Finkelstein's solution extended the Schwarzschild solution for the future of observers falling into a black hole. A complete extension had already been found by Martin Kruskal, who was urged to publish it.
These results came at the beginning of the golden age of general relativity, which was marked by general relativity and black holes becoming mainstream subjects of research. This process was helped by the discovery of pulsars in 1967, which were shown to be rapidly rotating neutron stars by 1969. Until that time, neutron stars, like black holes, were regarded as just theoretical curiosities; but the discovery of pulsars showed their physical relevance and spurred a further interest in all types of compact objects that might be formed by gravitational collapse.
In this period more general black hole solutions were found. In 1963, Roy Kerr found the exact solution for a rotating black hole. Two years later, Ezra Newman found the axisymmetric solution for a black hole that is both rotating and electrically charged. Through the work of Werner Israel, Brandon Carter, and David Robinson,[23] the no-hair theorem emerged, stating that a stationary black hole solution is completely described by the three parameters of the Kerr–Newman metric: mass, angular momentum, and electric charge.
For a long time, it was suspected that the strange features of the black hole solutions were pathological artefacts from the symmetry conditions imposed, and that the singularities would not appear in generic situations. This view was held in particular by Vladimir Belinsky, Isaak Khalatnikov, and Evgeny Lifshitz, who tried to prove that no singularities appear in generic solutions. However, in the late sixties Roger Penrose[25] and Stephen Hawking used global techniques to prove that singularities are generic.
Work by James Bardeen, Jacob Bekenstein, Carter, and Hawking in the early 1970s led to the formulation of black hole thermodynamics. These laws describe the behaviour of a black hole in close analogy to the laws of thermodynamics by relating mass to energy, area to entropy, and surface gravity to temperature. The analogy was completed when Hawking, in 1974, showed that quantum field theory predicts that black holes should radiate like a black body with a temperature proportional to the surface gravity of the black hole.
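The temperature Hawking derived is T = ħc³/(8πGMkB), inversely proportional to the mass, which is why this radiation is effectively unobservable for stellar-mass and larger black holes. A minimal sketch of the numbers, using the standard formula with illustrative constants:

```python
# Hawking temperature T = hbar*c^3 / (8*pi*G*M*k_B), inversely proportional
# to the black hole's mass.

import math

HBAR = 1.055e-34       # reduced Planck constant, J*s
C = 2.998e8            # speed of light, m/s
G = 6.674e-11          # gravitational constant
K_B = 1.381e-23        # Boltzmann constant, J/K
M_SUN = 1.989e30       # one solar mass, kg

def hawking_temperature_K(mass_kg):
    return HBAR * C**3 / (8.0 * math.pi * G * mass_kg * K_B)

print(hawking_temperature_K(M_SUN))        # ~6e-8 K, far below the 2.7 K CMB
print(hawking_temperature_K(10 * M_SUN))   # ten times colder again
```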
The term "black hole" was first publicly used by John Wheeler during a lecture in 1967. Although he is usually credited with coining the phrase, he always insisted that it was suggested to him by somebody else. The first recorded use of the term is in a 1964 letter by Anne Ewing to the American Association for the Advancement of Science. After Wheeler's use of the term, it was quickly adopted in general use.
Properties and structure
The no-hair theorem states that, once it achieves a stable condition after formation, a black hole has only three independent physical properties: mass, charge, and angular momentum.[24] Any two black holes that share the same values for these properties, or parameters, are indistinguishable according to classical (i.e. non-quantum) mechanics.
These properties are special because they are visible from outside a black hole. For example, a charged black hole repels other like charges just like any other charged object. Similarly, the total mass inside a sphere containing a black hole can be found by using the gravitational analog of Gauss's law, the ADM mass, far away from the black hole. Likewise, the angular momentum can be measured from far away using frame dragging by the gravitomagnetic field.
When an object falls into a black hole, any information about the shape of the object or distribution of charge on it is evenly distributed along the horizon of the black hole, and is lost to outside observers. The behavior of the horizon in this situation is closely analogous to that of a conductive stretchy membrane with friction and electrical resistance, a dissipative system (see membrane paradigm). This is different from other field theories like electromagnetism, which do not have any friction or resistivity at the microscopic level, because they are time-reversible. Because a black hole eventually achieves a stable state with only three parameters, there is no way to avoid losing information about the initial conditions: the gravitational and electric fields of a black hole give very little information about what went in. The information that is lost includes every quantity that cannot be measured far away from the black hole horizon, including the total baryon number, lepton number, and all the other nearly conserved pseudo-charges of particle physics. This behavior is so puzzling that it has been called the black hole information loss paradox.
Atomic Bombing of Hiroshima and Nagasaki
For six months, the United States had made use of intense strategic fire-bombing of 67 Japanese cities. Together with the United Kingdom and the Republic of China, the United States called for a surrender of Japan in the Potsdam Declaration on July 26, 1945. The Japanese government ignored this ultimatum. By executive order of President Harry S. Truman, the U.S. dropped the nuclear weapon "Little Boy" on the city of Hiroshima on Monday, August 6, 1945,[3][4] followed by the detonation of "Fat Man" over Nagasaki on August 9.
Within the first two to four months of the bombings, the acute effects killed 90,000–166,000 people in Hiroshima and 60,000–80,000 in Nagasaki,[1] with roughly half of the deaths in each city occurring on the first day. The Hiroshima prefectural health department estimates that, of the people who died on the day of the explosion, 60% died from flash or flame burns, 30% from falling debris and 10% from other causes. During the following months, large numbers died from the effect of burns, radiation sickness, and other injuries, compounded by illness. In a US estimate of the total immediate and short term cause of death, 15–20% died from radiation sickness, 20–30% from flash burns, and 50–60% from other injuries, compounded by illness. In both cities, most of the dead were civilians.
Six days after the detonation over Nagasaki, on August 15, Japan announced its surrender to the Allied Powers, signing the Instrument of Surrender on September 2, officially ending the Pacific War and therefore World War II. Germany had signed its Instrument of Surrender on May 7, ending the war in Europe. The bombings led, in part, to post-war Japan's adopting Three Non-Nuclear Principles, forbidding the nation from nuclear armament.[9] The role of the bombings in Japan's surrender and the U.S.'s ethical justification for them, as well as their strategical importance, is still debated.
At the time of its bombing, Hiroshima was a city of some industrial and military significance. A number of military camps were located nearby, including the headquarters of the Fifth Division and Field Marshal Shunroku Hata's 2nd General Army Headquarters, which commanded the defense of all of southern Japan.[22] Hiroshima was a minor supply and logistics base for the Japanese military. The city was a communications center, a storage point, and an assembly area for troops. It was one of several Japanese cities left deliberately untouched by American bombing, allowing a pristine environment to measure the damage caused by the atomic bomb.
The center of the city contained several reinforced concrete buildings and lighter structures. Outside the center, the area was congested by a dense collection of small wooden workshops set among Japanese houses. A few larger industrial plants lay near the outskirts of the city. The houses were constructed of wood with tile roofs, and many of the industrial buildings were also built around wood frames. The city as a whole was highly susceptible to fire damage.
The population of Hiroshima had reached a peak of over 381,000 earlier in the war, but prior to the atomic bombing the population had steadily decreased because of a systematic evacuation ordered by the Japanese government. At the time of the attack, the population was approximately 340,000–350,000.[1] Because official documents were burned, the exact population is uncertain.
The bombing
Seizo Yamada's ground level photo taken from approximately 7 km northeast of Hiroshima.
For the composition of the USAAF mission, see 509th Operations Group#Components.
Hiroshima was the primary target of the first nuclear bombing mission on August 6, with Kokura and Nagasaki being alternative targets. August 6 was chosen because clouds had previously obscured the target. The 393d Bombardment Squadron B-29 Enola Gay, piloted and commanded by 509th Composite Group commander Colonel Paul Tibbets, was launched from North Field airbase on Tinian in the West Pacific, about six hours' flight time from Japan. The Enola Gay (named after Colonel Tibbets' mother) was accompanied by two other B-29s: The Great Artiste, commanded by Major Charles W. Sweeney, carried instrumentation, and a then-nameless aircraft later called Necessary Evil (the photography aircraft) was commanded by Captain George Marquardt.
After leaving Tinian the aircraft made their way separately to Iwo Jima where they rendezvoused at 2,440 meters (8,010 ft) and set course for Japan. The aircraft arrived over the target in clear visibility at 9,855 meters (32,333 ft). During the journey, Navy Captain William Parsons had armed the bomb, which had been left unarmed to minimize the risks during takeoff. His assistant, 2nd Lt. Morris Jeppson, removed the safety devices 30 minutes before reaching the target area.
The dark portions of the garments this victim wore during the flash caused burns on their skin.
About an hour before the bombing, Japanese early warning radar detected the approach of some American aircraft headed for the southern part of Japan. An alert was given and radio broadcasting stopped in many cities, among them Hiroshima. At nearly 08:00, the radar operator in Hiroshima determined that the number of planes coming in was very small, probably not more than three, and the air raid alert was lifted. To conserve fuel and aircraft, the Japanese had decided not to intercept small formations. The normal radio broadcast warning was given to the people that it might be advisable to go to air-raid shelters if B-29s were actually sighted. However, the aircraft was assumed to be on a reconnaissance mission: at 07:31 the first B-29 to fly over Hiroshima at 32,000 feet (9,800 m) had been the weather observation aircraft Straight Flush, which sent a Morse code message to the Enola Gay indicating that the weather was good over the primary target, and when it then turned out to sea the "all clear" was sounded in the city. At 08:09 Colonel Tibbets started his bomb run and handed control over to his bombardier.
The release at 08:15 (Hiroshima time) went as planned, and the gravity bomb known as "Little Boy", a gun-type fission weapon with 60 kilograms (130 lb) of uranium-235, took 43 seconds to fall from the aircraft flying at 31,060 feet (9,470 m)[29] to the predetermined detonation height about 1,900 feet (580 m) above the city. The Enola Gay had traveled 11.5 miles away before it felt the shock waves from the blast.[30]
Due to crosswind, it missed the aiming point, the Aioi Bridge, by almost 800 feet (240 m) and detonated directly over Shima Surgical Clinic.[31] It created a blast equivalent to about 13 kilotons of TNT (54 TJ). (The U-235 weapon was considered very inefficient, with only 1.38% of its material fissioning.)[32] The radius of total destruction was about one mile (1.6 km), with resulting fires across 4.4 square miles (11 km2).[33] Americans estimated that 4.7 square miles (12 km2) of the city were destroyed. Japanese officials determined that 69% of Hiroshima's buildings were destroyed and another 6–7% damaged.
70,000–80,000 people, or some 30%[35] of the population of Hiroshima were killed immediately, and another 70,000 injured.[36] Over 90% of the doctors and 93% of the nurses in Hiroshima were killed or injured—most had been in the downtown area which received the greatest damage.
Although the U.S. had previously dropped leaflets warning civilians of air raids on 35 Japanese cities, including Hiroshima and Nagasaki,[38] the residents of Hiroshima were given no notice of the atomic bomb.
Positive Thinking May Be Good for Our Skin
Have you noticed that some older adults continue to feel good and stay active well into their senior years, while others appear to age rapidly and experience increased health problems? Positive thinking may play a significant role.
Research published in Psychology and Aging, a journal from the American Psychological Association (APA), shows that while genetics and overall physical health play a part in how people age, positive thinking can also play an important role.
According to an APA news release, researchers found a link between positive emotions and the onset of frailty in 1,558 initially non-frail older Mexican Americans living in five southwestern states. This was the first study to examine frailty and the protective role of positive thinking in the largest minority population in the United States.
How Was the Study Conducted?
Study authors Glenn Ostir, Ph.D., Kenneth Ottenbacher, Ph.D., and Kyriakos Markides, Ph.D., from the University of Texas Medical Branch at Galveston, followed older adults for seven years to study their level of positive thinking in relation to their level of frailty.
Frailty was assessed by measuring:
Weight loss
Exhaustion
Walking speed
Grip strength
The study says that positive emotions (or positive thinking) were measured by asking how often in the past week participants had the following thoughts:
“I felt that I was just as good as other people”
“I felt hopeful about the future”
“I was happy”
“I enjoyed life”
There’s a Link Between Positive Thinking and Frailty
The report said that the incidence of frailty among the older adult participants increased by nearly eight percent overall during the seven-year follow-up period, but people who scored high on positive affect or positive thinking were significantly less likely to become frail.
While researchers in the study couldn't explain why positive thinking or positive emotions reduced the incidence of frailty, they speculated that positive thinking may directly affect health via chemical and neural responses that help maintain an overall health balance.
Another possibility, according to the researchers, is that positive thinking can have a beneficial effect on people’s health by increasing a person’s intellectual, physical, psychological and social resources.
You Have a Choice About How You Think
I read somewhere that people can only hold one thought at a time. If that’s true, then you have a choice:
Focusing on a thought that makes you feel bad
or
Focusing on a thought that makes you feel good
Try to focus your energy on positive thinking rather than negative thinking, and look for reasons to feel happy and hopeful every day. If you put your energy toward positive thinking and ways to make your life more enjoyable, you may discover that positive thinking really does help you feel better.
Political condition of Nepal
The UN-OHCHR, in response to events in Nepal, set up a monitoring program in 2005 to assess and observe the human rights situation there.
On 22 November 2005, the Seven Party Alliance (SPA) of parliamentary parties and the Communist Party of Nepal (Maoist) agreed on a historic and unprecedented 12-point memorandum of understanding (MOU) for peace and democracy. Nepalese from various walks of life and the international community regarded the MOU as an appropriate political response to the crisis that was developing in Nepal. Against the backdrop of the historical sufferings of the Nepalese people and the enormous human cost of the last ten years of violent conflict, the MOU, which proposes a peaceful transition through an elected constituent assembly, created an acceptable formula for a united movement for democracy. As per the 12-point MOU, the SPA called for a protest movement, and the Communist Party of Nepal (Maoist) supported it. This led to a countrywide uprising called the Loktantra Andolan that started in April 2006. All political forces including civil society and professional organizations actively galvanized the people. This resulted in massive and spontaneous demonstrations and rallies held across Nepal against King Gyanendra's autocratic rule.
The people's participation was so broad, momentous and pervasive that the king feared being overthrown. On 21 April 2006, King Gyanendra declared that "power would be returned to the people". This had little effect on the people, who continued to occupy the streets of Kathmandu and other towns, openly defying the daytime curfew. Finally, King Gyanendra announced the reinstatement of the House of Representatives, thereby conceding one of the major demands of the SPA, at midnight on 24 April 2006. Following this action the coalition of political forces decided to call off the protests.
Twenty-one people died and thousands were injured during the 19 days of protests.
On 19 May 2006, the parliament assumed total legislative power and gave executive power to the Government of Nepal (previously known as His Majesty's Government). The names of many institutions (including the army) were stripped of the "royal" adjective, and the Raj Parishad (a council of the King's advisers) was abolished, with its duties assigned to the Parliament itself. The activities of the King became subject to parliamentary scrutiny and the King's properties were subjected to taxation. Moreover, Nepal was declared a secular state, abrogating its previous status as a Hindu kingdom. However, most of these changes have, as yet, not been implemented. On 19 July 2006, the prime minister, G. P. Koirala, sent a letter to the United Nations announcing the intention of the Nepalese government to hold elections to a constituent assembly by April 2007.
Facebook history
Mark Zuckerberg wrote Facemash, the predecessor to Facebook, on October 28, 2003, while attending Harvard as a sophomore. According to The Harvard Crimson, the site was comparable to Hot or Not, and "used photos compiled from the online facebooks of nine houses, placing two next to each other at a time and asking users to choose the 'hotter' person".[12][13]
Mark Zuckerberg co-created Facebook in his Harvard dorm room.
To accomplish this, Zuckerberg hacked into the protected areas of Harvard's computer network and copied the houses' private dormitory ID images. Harvard at that time did not have a student "facebook" (a directory with photos and basic information). Facemash attracted 450 visitors and 22,000 photo-views in its first four hours online.[12][14]
The site was quickly forwarded to several campus group list-servers, but was shut down a few days later by the Harvard administration. Zuckerberg was charged by the administration with breach of security, violating copyrights, and violating individual privacy, and faced expulsion. Ultimately, however, the charges were dropped.[15] Zuckerberg expanded on this initial project that semester by creating a social study tool ahead of an art history final: he uploaded 500 Augustan images to a website, with one image per page along with a comment section.[14] He opened the site up to his classmates, and people started sharing their notes.
The following semester, Zuckerberg began writing code for a new website in January 2004. He was inspired, he said, by an editorial in The Harvard Crimson about the Facemash incident.[16] On February 4, 2004, Zuckerberg launched "Thefacebook", originally located at thefacebook.com.[17]
Six days after the site launched, three Harvard seniors, Cameron Winklevoss, Tyler Winklevoss, and Divya Narendra, accused Zuckerberg of intentionally misleading them into believing he would help them build a social network called HarvardConnection.com, while he was instead using their ideas to build a competing product.[18] The three complained to the Harvard Crimson, and the newspaper began an investigation. The three later filed a lawsuit against Zuckerberg, subsequently settling.[19]
Membership was initially restricted to students of Harvard College, and within the first month, more than half the undergraduate population at Harvard was registered on the service.[20] Eduardo Saverin (business aspects), Dustin Moskovitz (programmer), Andrew McCollum (graphic artist), and Chris Hughes soon joined Zuckerberg to help promote the website. In March 2004, Facebook expanded to Stanford, Columbia, and Yale.[21] It soon opened to the other Ivy League schools, Boston University, New York University, MIT, and gradually most universities in Canada and the United States.[22][23]
Facebook incorporated in the summer of 2004, and the entrepreneur Sean Parker, who had been informally advising Zuckerberg, became the company's president.[24] In June 2004, Facebook moved its base of operations to Palo Alto, California.[21] It received its first investment later that month from PayPal co-founder Peter Thiel.[25] The company dropped The from its name after purchasing the domain name facebook.com in 2005 for $200,000.[26]
Total active users[N 1]
Date | Users (in millions) | Days later | Monthly growth[N 2]
August 26, 2008 | 100[27] | 1,665 | 178.38%
April 8, 2009 | 200[28] | 225 | 13.33%
September 15, 2009 | 300[29] | 150 | 10%
February 5, 2010 | 400[30] | 143 | 6.99%
July 21, 2010 | 500[31] | 166 | 4.52%
January 5, 2011 | 600[32][N 3] | 168 | 3.57%
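The "Monthly growth" column appears to be simple rather than compound growth: the percentage increase between milestones divided by the elapsed time in 30-day months. The sketch below reproduces the column under that assumption; the roughly 1 million users at launch assumed as the baseline for the first row is a guess, since the table does not state it.

```python
# Reproducing the "Monthly growth" column, assuming it is the simple percentage
# increase between milestones divided by the elapsed time in 30-day months.

milestones = [          # (total users in millions, days since previous milestone)
    (100, 1665),        # baseline for this row assumed to be ~1 million at launch
    (200, 225),
    (300, 150),
    (400, 143),
    (500, 166),
    (600, 168),
]

prev_users = 1          # assumed starting point, in millions (not stated in the table)
for users, days in milestones:
    pct_increase = (users / prev_users - 1) * 100
    monthly = pct_increase / (days / 30)
    print(f"{users}M after {days} days -> {monthly:.2f}% per month")
    prev_users = users
```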
Facebook launched a high school version in September 2005, which Zuckerberg called the next logical step.[33] At that time, high-school networks required an invitation to join.[34] Facebook later expanded membership eligibility to employees of several companies, including Apple Inc. and Microsoft.[35] Facebook was then opened on September 26, 2006, to everyone of age 13 and older with a valid email address.[36][37]
On October 24, 2007, Microsoft announced that it had purchased a 1.6% share of Facebook for $240 million, giving Facebook a total implied value of around $15 billion.[38] Microsoft's purchase included rights to place international ads on Facebook.[39] In October 2008, Facebook announced that it would set up its international headquarters in Dublin, Ireland.[40] In September 2009, Facebook said that it had turned cash flow positive for the first time.[41] In November 2010, according to SecondMarket Inc., an exchange for shares of privately held companies, Facebook's value was $41 billion (slightly surpassing eBay's), making it the third-largest US web company after Google and Amazon.[42] Facebook has been identified as a possible candidate for an IPO by 2013.[43]
Traffic to Facebook increased steadily after 2009. More people visited Facebook than Google for the week ending March 13, 2010.[44] Facebook also became the top social network across eight individual markets (Australia, the Philippines, Indonesia, Malaysia, Singapore, New Zealand, Hong Kong and Vietnam), while other brands commanded the top positions in certain markets, including Google-owned Orkut in India, Mixi.jp in Japan, CyWorld in South Korea, and Yahoo!'s Wretch.cc in Taiwan.
In March 2011 it was reported that Facebook removes approximately 20,000 profiles from the site every day for various infractions, including spam, inappropriate content and underage use, as part of its efforts to boost cyber security.