Date: Mon, 21 Jul 2003 10:52:36 -0700
From: "Hamid A. Wasti"
To: Lancair Mailing List
Subject: Re: [LML] Re: Essential Buss versus Fuel Endurance

I apologize that I do not have time to delve into this discussion at the depth that some others have gotten into.  As a result, I am going to keep the answers brief and skip a number of issues that could use an answer in the long post directed at me.  Sorry, but that is the reality of my workload.

In my opinion, the best advice in this whole thread was offered by Jeff: hardware is nice, but it will invariably fail.  The best safety item is your brain and your judgment; make sure you keep them sharp so they do not fail you.  I too have done foolhardy and sometimes even downright stupid things in my younger days, and luck played a big part in my still being around.  Now, in my older and supposedly wiser days, I try to rely more on allegedly improved judgment and less on luck.  If I can dissuade others from relying on their luck, I will have done some good.

Shannon Knoepflein wrote:

> Why would you not know what has went wrong?

You certainly can.  All you need is enough sensors, enough controls, and enough displays, and you can monitor EVERYTHING.  The reality, however, is that by the time you add enough sensors and monitoring circuits, plus more sensors and monitoring circuits to monitor those circuits, you have added more failure modes.  What is worse, you have potentially inundated yourself with so much data that you may not be able to perform your primary responsibility: fly the plane.
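
To put a rough number on the first point, here is a quick sketch.  The failure rates are invented purely for illustration (they are not measured values from any real installation), and independent failures are assumed:

    # Illustrative sketch only -- both failure rates below are made-up numbers.
    base_failure_prob = 1e-3      # chance per flight that the basic electrical system fails
    monitor_failure_prob = 1e-4   # chance per flight that any one added sensor/monitor fails

    def prob_any_failure(num_monitors):
        # Probability that the system or at least one monitor fails on a flight,
        # assuming the failures are independent (a simplification).
        p_all_ok = (1 - base_failure_prob) * (1 - monitor_failure_prob) ** num_monitors
        return 1 - p_all_ok

    for n in (0, 5, 20, 50):
        print(f"{n:2d} monitoring devices -> {prob_any_failure(n):.3%} chance of some failure")

With those made-up numbers, the chance of something on board failing on a given flight goes from about 0.10% with no monitoring to about 0.60% with 50 monitoring devices, even though each device is individually quite reliable.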

The approach you have taken is more along the lines of what a transport-category jet has.  That is great, but can we really afford the size and weight penalty in our small airplanes?  Also, in case you have not noticed, there is always a second person in the airliner's cockpit flying the plane while the pilot (pilot-not-flying, actually) thumbs through a manual several inches thick, trying to diagnose the problem.  We do not have that luxury either.

Talking about transport jets and their monitoring systems, some may recall an accident in England in the late 80's or early 90's, I believe at Manchester.  It was a twin-engine jet with the engines mounted on the tail and not visible from the cockpit, a DC-9 or something similar looking.  On the takeoff roll an engine fire light came on.  The pilots successfully aborted the takeoff, shut down the engine the warning pointed to, fired its fire extinguisher bottle, taxied off the runway on the other engine, and positioned the aircraft relative to the prevailing wind to minimize the spread of flames.  The problem was that the fire warning lights were mis-wired, so they had shut off the good engine and kept running the engine that was on fire.  Several people died as the fire spread to the fuselage and ended up being much worse than it would have been if the correct engine had been shut down.  I am not saying that the pilots should have shut off both engines right away; I am merely pointing out that the more systems you add, the greater the possibility that the monitoring system itself will fail.

> What evidence do you have of this sort of resistive failure?  Being a EE myself, I understand the concept, but have never seen it in practice, especially in a solid state device.

Transorbs can fail in this manner.  There is a transorb on the power line of every certified piece of electrical hardware in your cockpit, and most probably on most uncertified ones as well.  I have seen both power MOSFETs and power transistors fail in this manner after a catastrophic failure.  The primary failure causes the lead frame (the mechanical part that holds the actual semiconductor die) to melt and short its pins together through a small resistance.  The semiconductor itself can fail with a resistive short as well.
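
To give a feel for why a resistive short is so troublesome compared to a dead short, here is some back-of-the-envelope arithmetic.  The bus voltage, fault resistance, and breaker rating are assumed numbers picked for illustration, not taken from any particular installation:

    # Back-of-the-envelope only; all three values below are assumptions for illustration.
    bus_voltage = 14.0       # volts, a 12 V system with the alternator online
    fault_resistance = 2.0   # ohms, a plausible resistive short inside a failed part
    breaker_rating = 10.0    # amps, the circuit protection on that feeder

    fault_current = bus_voltage / fault_resistance           # Ohm's law: I = V / R
    heat_in_device = fault_current ** 2 * fault_resistance   # P = I^2 * R, dissipated in the fault

    print(f"Fault current: {fault_current:.1f} A")                    # 7.0 A
    print(f"Heat inside the failed part: {heat_in_device:.0f} W")     # about 100 W
    print("Does the breaker open?", fault_current > breaker_rating)   # False

Seven amps will never open a ten-amp breaker, yet roughly a hundred watts is cooking away inside a part that was never designed to dissipate it.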

> This is one of the big drawbacks of most systems, unless you have a way to monitor alternator current (which fortunately I do).

See my point above about too much monitoring making the system less reliable, and reducing safety by overloading you with data at a critical time.  What is "too much" and what is "not enough" is a personal decision, and I have a feeling that Shannon and I will never agree on it.

We can both present our points of view and let the readers decide what works for them.  There is no one right answer and one person being right does not make the other person wrong.

Hamid
