Thursday, 1 December 2016

medical

The Energy Balance of Running


During the two years of its monthly appearance, this column has looked at many objects—cars, turbines, airplanes, windows, mobile phones, and nuclear reactors—made by humans. Today’s focus is on the human body, specifically the way it keeps itself cool.
Before the development of long-range projectile weaponry some tens of thousands of years ago, our ancestors in Africa had only two ways to secure meat: by scavenging the leftovers of mightier beasts or by running down their own prey. Humans were able to occupy the second of those ecological niches thanks, in part, to two great advantages of bipedalism.
The first advantage is in how we breathe. A quadruped can take only a single breath per locomotive cycle because its thorax must absorb the impact on the front limbs. We, however, can choose other ratios, and that lets us use energy more flexibly. The second, and greater, advantage is in our extraordinary ability to regulate our body temperature, which allows us to do what lions cannot: to run long and hard in the noonday sun.
It all comes down to sweating. The two large animals we have mainly used for transport perspire profusely, compared to other quadrupeds: In one hour a horse can lose about 100 grams of water per square meter of skin, and a camel can lose up to 250 g/m2. However, a human being can easily shed 500 g/m2, enough to remove 550 to 600 watts’ worth of heat. Peak hourly sweating rates can surpass 2 kilograms per square meter, and the highest reported short-term sweating rate is twice that high.
We are the superstars of sweating, and we need to be. An amateur running a marathon at a slow pace will consume energy at a rate of 700 to 800 W, and an experienced marathoner who covers the 42.2 kilometers in 2.5 hours will metabolize at a rate of about 1,300 W.
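A quick back-of-the-envelope check ties these numbers together. The short Python sketch below assumes a body surface area of about 1.8 square meters and a latent heat of vaporization for sweat of roughly 2,430 joules per gram; both constants are assumptions chosen for illustration, not figures taken from the studies cited here.

# Rough check of the cooling figures above (assumed constants, not measured data)
SKIN_AREA_M2 = 1.8               # assumed adult body surface area
LATENT_HEAT_J_PER_G = 2430.0     # approximate heat removed per gram of sweat evaporated

sweat_rate_g_per_m2_per_h = 500.0                      # "easily shed 500 g/m2" per hour
sweat_g_per_hour = sweat_rate_g_per_m2_per_h * SKIN_AREA_M2
heat_removed_watts = sweat_g_per_hour * LATENT_HEAT_J_PER_G / 3600.0
print(f"Evaporating {sweat_g_per_hour:.0f} g of sweat per hour removes about {heat_removed_watts:.0f} W")
# Prints roughly 600 W, consistent with the 550 to 600 W quoted above and in the
# same range as the 700 to 800 W an amateur marathoner generates.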
And we have another advantage when we lose water: We don’t have to make up the deficit instantly. Humans can tolerate considerable temporary dehydration, provided they rehydrate within a day or so. In fact, the best marathon runners drink only about 200 milliliters per hour during the race.
Together these advantages allowed our ancestors to become the unrivaled diurnal, high-temperature predator. They could not outsprint an antelope, of course, but during a hot day they could dog its heels until it finally collapsed, exhausted.
Documented cases of such long-distance chases come from three continents and include some of the fleetest quadrupeds. In North America, the Tarahumara of northwestern Mexico could outrun deer. Further north, Paiutes and Navajos could exhaust pronghorn antelopes. In South Africa, Kalahari Basarwa ran down a variety of antelopes (mostly duikers, gemsbok, and kudus but also larger eland) and during the dry season even wildebeests and zebras. In Australia, some Aborigines would outrun kangaroos.
These runners even had an advantage over modern runners in expensive athletic shoes: Their barefoot running not only reduced their energy costs by about 4 percent (a nontrivial advantage on long runs) but also exposed them to fewer acute ankle and lower-leg injuries.
In the race of life, we humans are neither the fastest nor the most efficient. But we are certainly the most persistent.

medical

Brain Scans to Distinguish Between Brain Injury and PTSD

The symptoms of post-traumatic stress disorder and traumatic brain injury overlap in a dizzying blur of similarity: depression, difficulty concentrating, anxiety, fatigue, loss of interest, and more.
Mild traumatic brain injury (mTBI)—the result of the head being hit or violently shaken—and PTSD plague returning veterans and affect civilians: Each year, an estimated 8 million adults have PTSD and 1.3 million Americans sustain a mild brain injury.
Currently, there is no screening tool or instrument to reliably diagnose either condition. Instead, one’s best chance for an accurate diagnosis is an interview with a skilled physician. Due to the subjective nature of that process, many cases of each condition go undetected or misdiagnosed.
Now, a team from Draper, a Cambridge, Mass.-based not-for-profit research and development company, together with Harvard Medical School and Brigham and Women’s Hospital, is developing a non-invasive test to provide a straightforward diagnosis for either condition (or both). “It’s essentially a non-invasive biopsy looking at the chemical constituents of the brain, and trying to use that to make a diagnosis,” says John Irvine, Draper’s chief data scientist.
At the Center for Clinical Spectroscopy at Brigham and Women’s, Alexander Lin oversees the brain scans of veterans who have experienced a trauma, as well as healthy veterans and civilians as controls. Each participant lies down in a magnetic resonance imaging (MRI) machine. A typical MRI scan uses powerful magnetic fields to produce spatial images of the brain or other organs. Lin instead applies a different protocol, called magnetic resonance spectroscopy (MRS), to use the magnetic fields to detect levels of molecules in the brain called metabolites. His technicians scan four regions of each participant’s head, sections of approximately five cubic centimeters each, to capture a slew of raw data on the chemical composition of the brain.
Next, Irvine and his team apply a series of algorithms to clean and process that data, which is often noisy with weak chemical signals. The most abundant chemical in the brain is water, so the first step is to remove that signal from the data. After additional data processing, the team fits a set of wavelets (mathematical functions used to process signal information) to the processed MRS signals. These wavelets allow the team to identify unique metabolite peaks that may have been hidden in the original data heap.
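As a rough illustration of that peak-finding step, the sketch below builds a synthetic spectrum, subtracts a smooth baseline as a stand-in for water removal, and then applies SciPy's continuous-wavelet-transform peak detector. The peak positions, widths, and the simple baseline subtraction are illustrative assumptions, not Draper's actual processing pipeline.

import numpy as np
from scipy.signal import find_peaks_cwt

ppm = np.linspace(0.5, 5.0, 2048)                 # chemical-shift axis, in ppm

def lorentzian(x, center, width, amplitude):
    return amplitude * width**2 / ((x - center)**2 + width**2)

# Synthetic spectrum: three metabolite-like peaks, a broad water hump, and noise
spectrum = (lorentzian(ppm, 2.01, 0.03, 1.0)      # N-acetyl aspartate (approx. 2.0 ppm)
            + lorentzian(ppm, 3.03, 0.03, 0.8)    # creatine (approx. 3.0 ppm)
            + lorentzian(ppm, 3.20, 0.03, 0.6)    # choline (approx. 3.2 ppm)
            + lorentzian(ppm, 4.70, 0.80, 5.0)    # residual water, broad
            + np.random.normal(scale=0.02, size=ppm.size))

# Step 1: crude "water removal" -- subtract a smooth moving-average baseline
baseline = np.convolve(spectrum, np.ones(301) / 301, mode="same")
residual = spectrum - baseline

# Step 2: wavelet-based peak picking on the cleaned signal
peak_idx = find_peaks_cwt(residual, widths=np.arange(5, 40))
print("candidate metabolite peaks at (ppm):", np.round(ppm[peak_idx], 2))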
In an initial test of the “virtual biopsy” in 18 patients, the team identified two potential biomarkers: Patients with mTBI had lower levels of the metabolites N-acetyl aspartate and creatine than PTSD patients. Irvine emphasizes that the results are preliminary.
“I’m reluctant to put too much credence into what we discovered from just a handful of patients,” says Irvine. “We’re excited about getting more data and seeing if these relationships hold up.”
To date, the team has scanned 75 individuals and plans to expand the study to several hundred over the next year in collaboration with additional hospitals. Irvine suspects that it will be more than just one or two metabolites in the brain that distinguish each condition, but a suite of metabolites that make up a chemical signature associated with each condition. If the researchers can discover and validate that signature, returning veterans could get the brain scan as part of routine care, hopefully for the same price as a traditional MRI scan.
The combination of MRS and the data analysis tools could also be useful in other brain conditions involving changing levels of metabolites in the brain. Draper has a second project using the technique to study chronic traumatic encephalopathy, the progressive degenerative brain disease affecting football players and other athletes who experience repetitive brain trauma.
“It’s a widely applicable tool,” says Irvine. “As we have more experience with it and collect more data, we’re going to find other medical conditions where it’s going to be useful.”


Technology

IBM Making Silicon to Sort Viruses and Other Nanoscale Biological Targets

It’s long been understood that early disease detection is the key to successful treatment. But annual checkups with a doctor might not be frequent enough to help. So imagine if you could forgo a trip to the doctor’s office and instead detect signs of disease with a simple urine or saliva test at home.
This has, of course, been the aim of lab-on-a-chip technologies for years, but scientists at IBM Research may now have made the advance that could finally turn such at-home tests into reality.
In cross-disciplinary research described in the journal Nature Nanotechnology, a team at IBM led by research scientist Joshua Smith and Gustavo Stolovitzky, program director of IBM Translational Systems Biology and Nanobiotechnology, has been able to retool silicon-based technologies to create a diagnostic device that can separate viruses, DNA, and other nanoscale-size biological targets from saliva or urine. This could enable the device to detect the presence of diseases before any physical symptoms are visible.
Of course, the separation of nanoscale particles has been possible for years in various forms, such as ultracentrifugation, gel electrophoresis, chromatography, or filtration. These approaches all come with compromises: Centrifugation and chromatography can be very precise but require expensive machinery and trained technicians; gels and filter media are cheap and easy to use but are less precise and more difficult to recover samples from.
The basis of the IBM team’s approach is something called deterministic lateral displacement (DLD) separation technology, which was first developed in 2004. DLD is a microfluidic process that uses the laminar flow of fluid through a field of tiny posts to separate particles based on size. In this array of pillars, separation occurs because the smaller particles move with the fluid flow, while the larger particles are deflected along the direction of the pillar asymmetry. This sorting makes it possible to isolate, detect, and analyze the particles downstream.
With nanoscale DLD, the researchers can take a liquid sample and pass it through a silicon chip specially designed with an array of asymmetric pillars. This pillar array separates particles by size, down to a resolution of tens of nanometers.
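For a sense of the sizing involved, the sketch below uses a widely cited empirical rule for DLD arrays (Davis's formula, Dc ≈ 1.4 × g × ε^0.48, where g is the gap between pillars and ε the row-shift fraction) to estimate the critical diameter that divides "bumped" from "zigzagging" particles. The gap and row-shift values are illustrative assumptions, not IBM's published chip dimensions.

def critical_diameter_nm(gap_nm, row_shift_fraction):
    """Davis's empirical estimate of the smallest particle diameter that is
    deflected ("bumped") along the pillar axis rather than following the fluid."""
    return 1.4 * gap_nm * row_shift_fraction ** 0.48

gap_nm = 100.0    # assumed gap between pillars, in nanometers
epsilon = 0.1     # assumed row-shift fraction (the array pattern repeats every 10 rows)
print(f"critical diameter ~ {critical_diameter_nm(gap_nm, epsilon):.0f} nm")
# With these assumed numbers the cutoff lands near 46 nm: particles larger than
# that are deflected along the pillar asymmetry, while smaller ones follow the flow.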
To test their technology, the researchers targeted exosomes, which are 30- to 100-nanometer structures containing protein and genetic material that are present in many if not all biological fluids and that may play a role in blood coagulation, among other things. Exosomes serve as biomarkers for detecting cancers and other diseases.


medical

Doctors Still Struggle to Make the Most of Computer-Aided Diagnosis

As a timer counted down, a team of physicians from St. Michael’s Medical Center in Newark, N.J., conferred on a medical diagnosis question. Then another. And another. With each question, the stakes at Doctor’s Dilemma, an annual competition held in May in Washington, D.C., grew higher. By the end, the team had wrestled with 45 conditions, symptoms, or treatments. They defeated 50 teams to win the 2016 Osler Cup.
The stakes are even higher for real-life diagnoses, where doctors always face time pressure. That is why researchers have tried since the 1960s to supplement doctors’ memory and decision-making skills with computer-based diagnostic aids. In 2012, for example, IBM pitted a version of its Jeopardy!-winning artificial intelligence, Watson, against questions from Doctor’s Dilemma. But Big Blue’s brainiac couldn’t replicate the overwhelming success it had against human Jeopardy! players.
The trouble is, computerized diagnosis aids do not yet measure up to the performance of human doctors, according to several recent studies. Nor can makers of such software seem to agree on a single benchmark by which to measure performance. Using reports on such software in the peer-reviewed literature, one team of researchers found wide performance variations across different diseases, as well as different usage patterns among doctors. For example, younger doctors are likelier to spend time putting more patient data into a tool and likelier to benefit from the aid. Two presentations at the 6–8 November Diagnostic Error in Medicine Conference in Hollywood, Calif., confronted the issue of how to realistically incorporate technological aids into doctor training and hectic diagnosis routines.
Another issue is figuring out how to compare different software aids. “If you look at, for example, the big progress that has occurred in speech recognition or in image classification, it's really been brought about by having really good benchmark data sets and really like having actual competitions,” says computer scientist Ole Winther at the Technical University of Denmark in Lyngby. “We don't have the same in the medical domain.”
While IBM did publish a report in 2013 on its Watson-vs-Doctor’s Dilemma test, Winther says that he has been unable to obtain the subset of questions IBM used, so he could not directly compare Watson’s performance with that of a diagnostic aid he and his colleagues built, called FindZebra. Last year, his team estimated that both FindZebra and Watson list the correct diagnosis among their top 10 results about 60 percent of the time, which is in line with what a Spanish team reported earlier this year.
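The "correct diagnosis in the top 10" figure is essentially a recall-at-10 measurement. The toy sketch below shows how such a benchmark is computed; the case lists and diagnoses are made up for illustration and do not come from either study.

def recall_at_k(ranked_suggestions, correct_diagnoses, k=10):
    """Fraction of cases whose true diagnosis appears in the tool's top-k list."""
    hits = sum(1 for ranked, truth in zip(ranked_suggestions, correct_diagnoses)
               if truth in ranked[:k])
    return hits / len(correct_diagnoses)

# Hypothetical ranked output of a diagnostic aid for three cases
ranked = [
    ["sarcoidosis", "tuberculosis", "lymphoma"],
    ["migraine", "tension headache"],
    ["lupus", "rheumatoid arthritis", "Lyme disease"],
]
truth = ["lymphoma", "cluster headache", "lupus"]
print(round(recall_at_k(ranked, truth, k=10), 2))   # 0.67: two of three cases hit in the top 10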
Despite the lack of a unified benchmark for computer-aided diagnostics, individual doctors, family members of misdiagnosed patients, and academic and clinical groups have built and are marketing such aids. Clients include private health insurance companies and research hospitals around the world, among them a pair of medical facilities in North Carolina and Japan that have reported some success diagnosing patients with Watson. Yet, at a recent IBM Research event in Zurich, one of IBM’s clients, Jens-Peter Neumann of the Rhön-Klinikum hospital network in Germany, said that it is too early to estimate the potential cost savings of his team’s Watson collaboration.
In February 2016 the Rhön-Klinikum network began pilot-testing Watson against the ultimate challenge for any diagnostic aid: rare diseases. The 7,000 or so known rare diseases affect perhaps 7 percent of Europe’s population, according to Munich Re, an insurance and risk management firm, which predicts that as genomic screening grows more sophisticated, more than 1,000 additional diseases will be identified by 2020. “Memorizing them all is just not going to happen,” says computer scientist and physician Tobias Mueller of the University Clinic Marburg in Germany, who is involved in the Rhön-Klinikum pilot.
Instead the team is structuring the natural-language medical histories of the 522 patients in the pilot into the right format for Watson, a time-consuming process that combines human and computer efforts. Watson can then compare these structured histories to the medical literature and suggest ranked diagnoses. 
One issue, Mueller says, has been consistently processing medical literature in both German and English. So far, the team has opted to use a combination of medical taxonomies, such as MedDRA and ICD-10, to describe symptoms and diagnoses. He also notes that sometimes the knowledge sources fed into Watson contradict each other. In other words, computerized diagnosis aids struggle with some of the same problems humans do when sharing and comparing information. “However, this reflects the diversity of the knowledge base of Watson and is no different than having a room full of doctors with different backgrounds and different opinions. It's more a strength than a weakness,” Mueller says.
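A minimal sketch of what that structuring step might look like: free-text symptom mentions in either German or English are mapped onto a shared taxonomy code (ICD-10 here), so that records from both languages end up in one comparable format. The tiny term list and the record format are illustrative assumptions, not the Rhön-Klinikum pipeline or Watson's actual input schema.

# Hypothetical bilingual symptom-to-code lookup (ICD-10 codes: R51 headache,
# R50.9 fever unspecified, R53 malaise and fatigue)
SYMPTOM_TO_ICD10 = {
    "headache": "R51", "kopfschmerz": "R51",
    "fever": "R50.9",  "fieber": "R50.9",
    "fatigue": "R53",  "müdigkeit": "R53",
}

def structure_history(free_text):
    """Return the sorted, de-duplicated ICD-10 codes mentioned in a history note."""
    text = free_text.lower()
    codes = {code for term, code in SYMPTOM_TO_ICD10.items() if term in text}
    return sorted(codes)

print(structure_history("Patientin klagt über Fieber und Kopfschmerz"))
# -> ['R50.9', 'R51'] -- the same codes an English note about fever and headache
#    would produce, which is what makes cross-language comparison possible.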
Despite the struggles, Winther says computer-aided diagnosis will ultimately mature: “A lot of patients spend years and years juggling between [general practitioners] and the wrong specialists. That’s still a challenge where there’s room for these kinds of tools.”

medical


Cheap, Rugged, Sweat-Sensing Skin Patch Hints at Bloodless Testing

Sweat could be the next thing wearable devices sense to track your health, researchers say. A new microfluidic skin patch capable of collecting and analyzing sweat has survived tests that included a grueling 104-kilometer bike race. And the next-generation wearable device has attracted the attention of companies such as cosmetics giant L’Oreal and a major sports beverage maker—not to mention the U.S. military. It could even pave the way for a pain-free, bloodless method of prescreening people for diabetes, according to its inventors.
The flexible sweat sensor collects sweat in a tiny tubing system as it’s worn against the skin. Different sections of the sensor slowly change color as they react to different levels of certain chemicals found in sweat. Any smartphone with the right app can take a picture of the sensor and automatically interpret the color changes, translating the sweat’s biochemistry into indicators of health. To test the device’s ruggedness, volunteers even wore the sweat sensor during a long-distance, outdoor bicycling race.
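To make the color-to-chemistry step concrete, here is a minimal sketch of how an app might turn a sampled color value into a concentration using a calibration curve. The green-channel values, the chloride analyte, and the calibration points are hypothetical stand-ins, not the actual chemistry or software of the patch described here.

import numpy as np

# Hypothetical calibration: mean green-channel intensity of a reaction zone
# versus sweat chloride concentration (darker zone = more analyte)
green_intensity = np.array([200.0, 170.0, 140.0, 110.0, 80.0])
chloride_mM     = np.array([  0.0,  25.0,  50.0,  75.0, 100.0])

def estimate_chloride(measured_green):
    """Interpolate a measured green value onto the (made-up) calibration curve."""
    # np.interp needs increasing x values, so interpolate over the reversed arrays
    return float(np.interp(measured_green, green_intensity[::-1], chloride_mM[::-1]))

print(estimate_chloride(125.0))   # -> 62.5 mM under this hypothetical calibration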


Technews

The internet of things: a way small firms can use it


Fred Dabney couldn’t sleep at night. The owner of Quansett Nurseries in South Dartmouth, Mass., didn’t know how much water was left in the wells he depends on to irrigate the 10 acres he farms. If his wells ran dry, Mr. Dabney would be out of business.
“Now, I just hit a button on the computer in my office. I can see exactly how much water I have in each well, and it’s tested every few hours,” says the tanned farmer with a trimmed white beard. “Much to my surprise and delight, I discovered I had a heck of a lot more water than I thought I did, which made me breathe a lot easier.”
The sensors that Wellntel, which makes groundwater monitors, installed in two of Dabney’s wells are part of a program he and dozens of other farmers, fishermen, and small-business owners in southeast Massachusetts have joined. They're outfitting and troubleshooting the internet of things (IoT) in the dirt, diesel, and seawater of the real world.

Wednesday, 30 November 2016

Science and Technology

What is HTML?

 

HTML is the standard markup language for creating Web pages.
  • HTML stands for Hyper Text Markup Language
  • HTML describes the structure of Web pages using markup
  • HTML elements are the building blocks of HTML pages
  • HTML elements are represented by tags
  • HTML tags label pieces of content such as "heading", "paragraph", "table", and so on
  • Browsers do not display the HTML tags, but use them to render the content of the page

A Simple HTML Document

<!DOCTYPE html>
<html>
<head>
<title>Page Title</title>
</head>
<body>

<h1>My First Heading</h1>
<p>My first paragraph.</p>

</body>
</html>

Example Explained

  • The <!DOCTYPE html> declaration defines this document to be HTML5
  • The <html> element is the root element of an HTML page
  • The <head> element contains meta information about the document
  • The <title> element specifies a title for the document
  • The <body> element contains the visible page content
  • The <h1> element defines a large heading
  • The <p> element defines a paragraph
HTML Tags

HTML tags are element names surrounded by angle brackets:
<tagname>content goes here...</tagname>

  • HTML tags normally come in pairs like <p> and </p>
  • The first tag in a pair is the start tag, the second tag is the end tag
  • The end tag is written like the start tag, but with a forward slash inserted before the tag name