Friday, January 30, 2015

Flu and death - a short introduction.

I think the issue surrounding H3N2 here is whether the current state of affairs represents the logical outcome, given contemporary scientific knowledge of influenza virology and epidemiology. I believe that, in its current form, the response is probably correct: reinforce education in personal hygiene and mask-wearing, and teach citizens which groups are most vulnerable to these viruses (the extremes of age, those with chronic illness, organ failure, and so on).

I would like to say up front that I am not a microbiologist by profession, nor have I undergone any post-graduate training in microbiology. I approach the problem here as an amateur, a person with an interest in science.

Influenza and H3N2

The influenza A viruses are classified into subtypes according to their surface glycoproteins: haemagglutinin and neuraminidase. Currently, there are 18 H subtypes and 11 N subtypes, and most of these are found infecting other animals. In the past 100 years, though, only three influenza A subtypes have established themselves in humans: H1N1, H2N2, and H3N2. Most epidemics occur when an originally non-human virus changes enough to settle in the human respiratory tract; the most important element here is that it needs to stick properly to, and stay in, the respiratory tract.

The ability to stick properly depends on the "receptor binding sites" on the viral surface and their ability to bind receptors containing human-type alpha-2,6-linked sialic acid (as opposed to alpha-2,3-linked sialic acid), which are located on the surface of the upper respiratory tract. Specific mutations in the receptor binding sites are nowadays used to delineate or predict a virus strain's ability to bind to the human upper respiratory tract.

Influenza viruses, and especially strains of subtype A(H3N2), undergo continual evolutionary change (which we call "antigenic drift", because it is a continual, slow, and gradual process). It is not difficult to understand why this change is forced upon the virus: the human adaptive immune system is very good at combating an infection it has seen before, and moreover, we now use vaccines to make sure that viruses carrying old antigens won't get anywhere. So there is a clear selection pressure on the influenza virus. And for the record, after forty-odd years of history, A(H3N2) is still alive and well.

I honestly do not think this A(H3N2) is so different from the previous ones that infected many of us before. You can probably find some distinct genetic features, and you can probably also show a statistically significant increase in deaths and infections, even after vaccine failure has been controlled for (after all, the vaccine is only a contributing factor, and the immune system as a whole is more important). But I think the difference is a quantitative one, not a qualitative one.

It is notable, though the reason is unclear, that strains of A(H3N2) are known to cause increased mortality compared with other subtypes of flu virus such as H1N1. The comment quoted in the article saying that the virus targets younger people probably reflects increased admission of young people to the ICU, which would imply that the virulence (the ability to infect and kill) of the virus is higher than before.

Mortality

I think one should always look at mortality numbers with the background of those who died in mind. The key question is: who is at risk? For example, while SARS killed 299 people in Hong Kong in 2003, tuberculosis killed 275 in the same year, and even in 2014 there were 184 tuberculosis deaths. So is tuberculosis approximately 92% as severe as SARS was in 2003? I don't think so, and I'd assume you won't fall for that silly percentage trick either. YES, tuberculosis is a big problem in Hong Kong, but it isn't as much of a problem as SARS was.
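For completeness, here is where that percentage comes from; the calculation is exactly the kind of naive ratio I am arguing against, using the death counts quoted above:

```python
# Where the "92%" comes from: a crude ratio of death counts,
# which ignores who actually died and who was at risk.
sars_deaths_2003 = 299
tb_deaths_2003 = 275

ratio = tb_deaths_2003 / sars_deaths_2003
print(f"TB deaths were {ratio:.0%} of SARS deaths in 2003")  # ~92%
```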

I guess the determinant here is whether the disease leads to a lot of deaths in previously healthy people. If it does, it is a big problem, and something should be done immediately to stop its spread. But if it only targets certain groups of people, the problem is easier to tackle, because there aren't as many people at risk. And then another factor comes into play: does the infection have a period during which the infected person is not yet very unwell, but is already quite infectious?

Influenza is typically infectious from around 1 day before until 4-7 days after the initial symptoms appear. And to make things better, if you wear a mask in public areas, even if you take it off at work, chances are you probably won't get infected by it.

The bottom line

Influenza A(H3N2), in its current form, has a much lower mortality rate than other dreaded respiratory infections, but it is obviously worse than in previous years.

There aren't many easy ways of preventing it other than asking people to practise proper personal hygiene and to take the vaccine (despite the reduced efficacy, 20% protection is still way better than no protection). Wearing a mask is good, but that is probably limited to those with a reasonable sense of civic duty, which has lately become a rare thing in Hong Kong.
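To put that 20% figure into perspective, here is a minimal sketch; the 10% baseline attack rate is an assumption of mine purely for illustration, not a figure from any study:

```python
# Hypothetical illustration of 20% vaccine effectiveness.
# The 10% baseline attack rate is an assumed figure, not from the post.
baseline_attack_rate = 0.10      # assumed risk of catching flu if unvaccinated
vaccine_effectiveness = 0.20     # the ~20% protection quoted for this season

vaccinated_attack_rate = baseline_attack_rate * (1 - vaccine_effectiveness)
print(f"Unvaccinated risk: {baseline_attack_rate:.1%}")    # 10.0%
print(f"Vaccinated risk:   {vaccinated_attack_rate:.1%}")  # 8.0%
# Not spectacular, but still better than no protection at all.
```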

Monday, April 7, 2014

Utilitarian aspect of learning disorders

Looking into the classification of learning disorders, we can see a huge inequality between learners with different specific learning disorders. What I am saying is not that people who are dyslexic are put into a more difficult situation compared with the "normal" crowd, but rather that people with a less recognized learning disorder such as dyscalculia are essentially labeled as "retarded" or "stupid", whereas people with a more recognized (even though less well-defined) disorder such as dyslexia receive a lot of aid (in fact, so much that they sometimes fare better than the average student), when in fact both groups have a deficit in one and only one domain of learning. This, sir, is extremely unfair.


I believe that dyslexia receives so much attention because, number one, the alleles related to it probably co-segregate with a lot of high-fitness alleles (in the evolutionary sense: intelligence, beauty, sociability, and so on), and the parents of these children tend to be well-off, better educated, and in a better position to fight for resources for their children. And number two, dyslexia is interesting to researchers because these children fail in only one aspect of language development, and there is knowledge to be found in this area[1].

I believe that care for these developmental disorders would improve if we could define a model in which a child is scored along different dimensions, both normal and abnormal. For example, a child with dyslexia would have a perfectly normal (or even superior) performance dimension, whereas his verbal dimension would be two standard deviations lower than the performance dimension. On the other hand, the social and verbal dimensions would necessarily be worse in a person with autism, and a person with attention-deficit hyperactivity disorder would have an attention dimension two standard deviations lower than his average ability in all other dimensions.
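As a rough sketch of the kind of scoring scheme I have in mind, the snippet below flags any dimension that falls two standard deviations below the mean of the child's other dimensions; the dimension names, the example scores, and the cutoff are all hypothetical choices of mine, not an established instrument:

```python
# Hypothetical sketch of a dimensional scoring scheme.
# Scores are z-scores (mean 0, SD 1 relative to age-matched norms).
from statistics import mean

def isolated_deficits(scores: dict[str, float], gap_sd: float = 2.0) -> list[str]:
    """Return dimensions scoring at least `gap_sd` SDs below the mean
    of the child's remaining dimensions."""
    flagged = []
    for dim, score in scores.items():
        others = [s for d, s in scores.items() if d != dim]
        if mean(others) - score >= gap_sd:
            flagged.append(dim)
    return flagged

# Example: a child with an isolated verbal deficit (a dyslexia-like profile).
child = {"performance": 0.8, "verbal": -1.6, "social": 0.5, "attention": 0.3}
print(isolated_deficits(child))  # ['verbal']
```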

With such a classification, we could identify patients whose deficit lies in a single dimension, which may be readily treatable, and with a central registry, researchers could recruit subjects with a particular deficit to search for the treatment plan best suited to the children in question.

As an added benefit, the scheme would also identify children whose disability is generalized, for whom a specific treatment or a specific diagnosis is not helpful.

What do you think?

[1] I am sure as hell that researchers are more interested in "lesion patients" than in a completely debilitated person. Just look at the "Broca" patient, the "Wernicke" patient, and so on in the neurology ward, and the stroke patient beside them; you'll see the difference. The lesion patient gets interviewed by 100+ medical students and junior doctors, whereas the stroke patient struggles to get anybody to even speak to her.

Sunday, March 30, 2014

Water

So, what are the requirements for water at home?

I come from a pathology background, so naturally I look at water from a laboratory perspective. What is required of laboratory reagent water? In a laboratory, we look at several aspects:

(1) Ionic impurities (e.g. resistivity >10 megaohm·cm)
(2) Organic impurities (e.g. <500 ppb)
(3) Microbiological impurities (e.g. <10 cfu/ml), and
(4) Particulate content (e.g. passing through a 0.22 um absolute screen filter)

The first is almost an absolute requirement: laboratory reagent water should contain as little undesired ionic contamination as possible, as it will interfere with trace analysis (or, if the contamination is severe, non-trace analysis, and that would be gross). The resistivity of the water is used as a measure of ionic contamination. Water has a theoretical maximum resistivity of 18.2 megaohm·cm; when exposed to air, the resistivity drops to around 1 megaohm·cm as carbon dioxide dissolves in. Ionic contaminants are mostly removed with ion-exchange resins, and to improve efficiency, multiple resins are often used in cascade.
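If you prefer to think in conductivity, the two figures are simply reciprocals of each other; a quick sketch using the numbers above:

```python
# Resistivity (megaohm.cm) and conductivity (microsiemens/cm) are reciprocals.
def conductivity_uS_per_cm(resistivity_Mohm_cm: float) -> float:
    return 1.0 / resistivity_Mohm_cm

print(conductivity_uS_per_cm(18.2))  # ~0.055 uS/cm: theoretically pure water
print(conductivity_uS_per_cm(1.0))   # 1 uS/cm: pure water equilibrated with air
print(conductivity_uS_per_cm(10.0))  # 0.1 uS/cm: the ">10 megaohm.cm" spec above
```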

The second refers to the amount of organic compounds dissolved in the water. Organic contaminants can be removed by oxidation with UV light, as well as by adsorption onto activated charcoal.

Microbiological impurities are often removed by depth filtration and by chlorination of the tap water.

Particulate content is usually removed in multiple stages. First, a depth filter, made of fibers interwoven to trap particulates in a random manner, is used; a depth filter has high capacity but is not an absolute filter. In contrast, a screen filter (an absolute filter) is used at the end of the water purification process to obtain an absolute cutoff. However, screen filters have a low capacity for contaminants, so placing a depth filter in front of one greatly increases its lifetime.

So, for human consumption, what is the role of water treatment?

Briefly, water from the Water Supplies Department (WSD) in Hong Kong is considered potable, i.e. harmful constituents are at very low levels when the water leaves the WSD. What is the caveat? The route between the WSD and your home. What could go wrong?

Plastic pipes can leach organics. Old pipes (if any) can leach heavy metal ions. Bacteria can grow in the water tank on top of the building. Iron oxide fragments can get into the water, and so on. Another problem is the chlorine in the water, which changes its taste.

To be honest, though, none of this is significant to the extent that drinking from the tap is harmful. And to be clear, boiling probably does not help (other than perhaps evaporating some volatile organics). So is a water treatment system at home helpful? Take a chance: make a cup of tea with tap water and a cup with distilled water from a reputable manufacturer, and it WILL taste different. Why? The organics, the chlorine content, the hardness: they all make a difference to the taste.

Thus, in my opinion, water treatment at home is mostly aesthetic (taste enhancement) and psychological (feeling a little bit safer with regard to heavy metals, which are essentially a non-issue in HK).

For my home, I consider a depth filter, followed by ion-exchange resins (to remove cationic [heavy metal] impurities) and an activated charcoal filter, to be adequate for health. In fact, for well-maintained housing estates, even this shouldn't be necessary, except perhaps the charcoal filter, which improves the taste of water a lot.

There is probably little need for elaborate deionization, reverse osmosis, or the like for home water.

Monday, March 17, 2014

Two types of oil misunderstood: Canola and coconut

Oil Facts:
1. Canola oil is not considered harmful from a toxicological aspect. In fact it's quite healthy!
2. Coconut oil, on the other hand, is not so good for your health.

Information for those who want to read a little bit more:

Canola oil
- Produced from cultivars of rapeseed (a mustard-family plant) bred for low erucic acid content.
- Erucic acid, for those unfamiliar with it, is a naturally occurring omega-9 fatty acid with a chain length of 22 carbons (denoted C22:1 omega-9). It has not been reported to be associated with human toxicity, although it appears to be toxic to mice[1].
- Current cultivars of rapeseed contain only trace quantities of erucic acid (less than 0.1%).
- Other purported toxins in rapeseed include glucosinolates, which are said to be goitrogens (causing goitre; toxic to the thyroid gland). These are at very low levels in modern rapeseed plants, which contain less than 10 umol/g, compared with broccoli, a commonly eaten plant containing glucosinolates on the order of 10 mmol/g (dry weight), a 1000x difference. If you can eat broccoli, the glucosinolates in canola oil are probably OK.
- It is low in saturated fat, and contains a favorable profile of omega-6 and omega-3 fatty acids.
- It has a fairly high smoke point (204 C), about 10 degrees higher than that of regular (non-extra-virgin) olive oil used for cooking.

Coconut oil
- It contains 91% saturated fat, the highest of all natural oils (excluding those that are hydrogenated). Eating that much saturated fat *cannot* be very good for your health.
- It does contain a blend of medium-chain fatty acids, some of which are good (and some of which are bad) for the heart. One should also note that the famous "heart-killer" butter also contains some 10% myristic acid, which is said to be one of the health-promoting components of coconut oil.
- Yes, it's probably healthier than butter, but probably less healthy than even the supposedly evil canola oil mentioned above.

Some general facts about oil-making
- Other than cold-pressed oils, most oils produced nowadays involve a solvent-extraction step using organic solvents such as hexane or absolute alcohol. Hexane contamination in the finished oil is usually insignificant; if you ask me, hexane is probably the least of your worries in your daily diet. If you live in Hong Kong, the air you breathe is probably more harmful than the hexane in your oil, given the recent PM2.5 readings.

[1] Chocolate is toxic to dogs. Plenty of things are toxic to one species and not to others. If you look into a mouse biology textbook, you will learn that mice do not digest vegetable fat as well as many other animals do.

Sunday, March 16, 2014

A cancer diagnosed at a late stage

A recent post[1] in "名校 Secrets" concerns a case of Stage III nasopharyngeal cancer in a middle-aged lady. Following an admittedly flamebait reply, there was a lot of discussion regarding the role of the doctor in the diagnosis of early cancers.



When faced with a diagnosis of "late-stage cancer", people will always look back, try to find an occasion when they mentioned their first symptom to a doctor, and try to blame the doctor for not doing their job. What they don't realize is that the signs and symptoms of early-stage cancers, with few exceptions, are non-specific and non-sensitive, mimic a thousand other diseases, and in general do not register as a "red flag" in most people's eyes. And for that matter, most early-stage cancers are NOT symptomatic at all.


If cancers were so easy to diagnose at an early stage, we wouldn't have a cancer problem, we wouldn't need thousands of researchers worldwide combating cancer, and we most certainly wouldn't need a billion-dollar pharmaceutical industry.


[1] https://www.facebook.com/schools.secrets/posts/297387690422653

Friday, March 14, 2014

Surgical mask, N95 mask...

A response to the following article: http://www.vjmedia.com.hk/articles/2014/03/14/66267

With the current evidence, surgical masks are non-inferior to N95 respirators in preventing influenza infection, which is the basic reason for their existence in modern healthcare: preventing droplet-transmitted infections[1]. It should be noted that there are multiple reasons for wearing a surgical mask, such as preventing the wearer's droplets from contaminating the operating field, or decreasing the number of droplets emitted during, say, sneezing.
On the other hand, a lot of Taiwanese and Japanese people wear cloth masks. While there are not many studies on the matter, Rengasamy et al provided some evidence that they don't really work[2]: they are not good enough at trapping anything, from influenza to tuberculosis.

While we all know that we can buy both N95 respirators and simple surgical masks, the need for fit-testing of N95 respirators is often forgotten: you need a tight-fitting N95 to enjoy the benefits of a respirator. How is a respirator different from a mask? A major difference is that a surgical mask aims at filtering only a portion of the air the wearer breathes, whereas a respirator aims at filtering all of it; practically all the air you inspire goes through the N95 if you are wearing it correctly.

Let me repeat it: you need to do a fit test to determine which N95 model fits you best. If an N95 does not fit, you are not enjoying any of its benefits. A simple rule: if it feels easy to wear, either you work in the industry, or you are wearing it wrong.

There are some studies suggesting that N95 fit-testing is not necessary[3]. However, this has not been published in a peer-reviewed journal, so without further studies it is probably wise to continue doing fit tests. A rule of thumb for those without access to fit-testing: the 3M 1860 for gentlemen, and the 1860S for ladies. Your mileage may vary.

[1] JAMA. 2009 Nov 4;302(17):1865-71
[2] Ann Occup Hyg. 2010 Oct;54(7):789-98.
[3] ICAAC 2009; Oral session K-1918b

Thursday, March 13, 2014

Hard disk...

Question

I'm about to purchase a SATA hard drive. Aside from the storage capacity, are there any other factors I should keep in mind? I care above all about reliability. Is a more expensive drive less error-prone than a low-end one?

Answer

Consider reading Pinheiro et al. (2007), "Failure Trends in a Large Disk Drive Population", Proceedings of the 5th USENIX Conference on File and Storage Technologies (FAST '07), February 2007. Just search for the title on Google.

In general, drives from the same manufacturer are made to the same specification in terms of the disk assembly; it is usually the tolerances that differ. To give an example, if you want a paper circle of 5 cm diameter, a circle of 4.5 or 5.5 cm may be acceptable for one use (e.g. decorating a child's room at home), but a circle of 5.0 cm, plus or minus 1 mm (i.e. within 4.9-5.1 cm), would be required if it were a decoration for the launch event of some big, big company.

For example, the load/unload cycle specification of a home drive may be around 300,000 cycles, whereas that of an enterprise drive would be around 600,000 cycles, double the figure. The tighter specification also applies to the drive assembly and the platter manufacturing process, and thus the non-recoverable read error rate is much lower for enterprise drives. For example, a typical current home drive, the Caviar Black (from Western Digital), is rated at one non-recoverable read error per 10^14 bits read, whereas a typical drive aimed at datacenter servers, the WD RE SAS, is rated at one non-recoverable read error per 10^15 bits read. Whether that tenfold difference in reliability matters to you is another matter.
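To put those error rates into perspective, here is a rough sketch of what they mean when you read a whole drive end to end; the 4 TB capacity is an assumption of mine for illustration, and the model (independent errors, Poisson approximation) is a simplification:

```python
# Rough estimate of hitting at least one non-recoverable read error (URE)
# when reading an entire drive once. Assumes a 4 TB drive and independent
# errors (a simplification); the error rates are the spec-sheet figures above.
import math

DRIVE_BITS = 4e12 * 8  # assumed 4 TB drive, expressed in bits

def p_at_least_one_ure(error_rate_per_bit: float, bits_read: float = DRIVE_BITS) -> float:
    expected_errors = error_rate_per_bit * bits_read
    return 1 - math.exp(-expected_errors)  # Poisson approximation

print(f"Home drive (1e-14):       {p_at_least_one_ure(1e-14):.0%}")  # ~27%
print(f"Enterprise drive (1e-15): {p_at_least_one_ure(1e-15):.0%}")  # ~3%
```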

To be honest, how you use the drive is likely more important than which drive you use. Below is a summary of Google's findings:
  • 6-7% of drives fail within the first year of use, and more than half of those failures happen within the first 6 months. These drives tend to have been heavily utilized during that period.
  • Drive failures follow a double-peak pattern: the first peak is within 3 months, and the second peak is around 3 years.
  • After the first year, there is in general an 8% annual failure rate (a rough survival sketch based on these rates follows this list).
  • The effect of temperature is twofold: [1] the lowest failure rate is seen in disks running at around 40 C; [2] as the drive ages, the failure rate rises exponentially with temperature by the third year. To interpret this: running the drive at ~35 C achieves the best compromise between longevity and early failure, and if your hard drive can be replaced every 2 years, running drives as hot as 45 C would in fact decrease the failure rate; past the second year, however, failures at 45 C increase exponentially.
  • If you use SMART reporting software (a nice one is CrystalDiskInfo: http://crystalmark.info/software/CrystalDiskInfo/index-e.html): after the first scan error, 10% of drives fail within days and 30% fail within 6 months, so back up and discard the drive accordingly once you see one. After a reallocation event, 10% of drives fail within ~4 months. Note, however, that only about 60% of all hard drive failures are predicted by SMART.
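As promised above, here is a back-of-the-envelope survival calculation using just the headline rates from the list (roughly 6.5% failures in year one, 8% per year afterwards); it assumes the annualized rates apply uniformly and independently, which is a simplification of the double-peaked curve described above:

```python
# Crude cumulative survival estimate from the annualized failure rates above.
# Assumes ~6.5% failures in year 1 and 8% per year thereafter, applied
# independently; the real distribution is double-peaked, so treat this as
# a rough guide only.
def surviving_fraction(years: int, first_year: float = 0.065, later_years: float = 0.08) -> float:
    survival = 1.0
    for year in range(1, years + 1):
        rate = first_year if year == 1 else later_years
        survival *= 1 - rate
    return survival

for y in range(1, 6):
    print(f"After year {y}: {surviving_fraction(y):.0%} of drives still running")
# Roughly: 94%, 86%, 79%, 73%, 67%
```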
MTBF

Mean time between failures (MTBF) is basically not very useful for the typical consumer; the figure is usually ideal and theoretical. Say we have 500,000 drives, each with an MTBF of 500,000 hours: if you run all of them together, statistically speaking you can expect one of them to fail every hour, provided you run them within their specifications (temperature, humidity, power supply quality...). With reference to the Google study, the realistic useful life of a hard drive is more like 2 years (in a non-redundant system) or 3 years (in a redundant system) if you use it 24 hours a day. In a redundant system (e.g. RAID-5 or RAID-6) you can lose a hard drive without losing data; in particular, with RAID 6 you can lose a drive and still have redundancy during the rebuild process.
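To make the "one failure per hour" claim concrete, a quick sketch using the numbers from the paragraph above:

```python
# Expected failures per hour in a fleet of drives, derived from the MTBF figure.
# With 500,000 drives at an MTBF of 500,000 hours, roughly one drive in the
# fleet fails every hour, even though any individual drive looks very reliable.
mtbf_hours = 500_000
fleet_size = 500_000

failures_per_hour = fleet_size / mtbf_hours
print(failures_per_hour)  # 1.0 expected failure per hour across the fleet
```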

Service life

One often sees a manufacturer quoting a service life such as "5 years" while offering a warranty of "3 years". Translation: "We believe it should last some 5 years. If it fails within the first three years of use, we'll replace it at our cost, but if it fails between the 3rd and 5th year, poor you." It certainly won't be the case that they have installed some sort of time bomb to make the drive unusable by its fifth birthday, but you should get a new hard disk and use it instead of a 5-year-old drive if your data is at all precious.