Re: Hardware Abstraction Layer - Rev 0.01

I wrote:

>>
>> Digital values are either TRUE or FALSE.  A "bit" data
>> type is typedef'ed for digital data:
>>
>> typedef enum { FALSE = 0, TRUE = 1 } bit;
>>
>> (C++ defines a "bool" type, but I'm not sure it is
>> supported in C.  If it is, it would be better than
>> creating a custom type.)

and Will replied:

> I recommend using "char" for booleans.
>
> A C++ bool will be 1 byte but an enum as you
> have above will be 4 bytes.  But the type char is
> completely unambiguous.  Defining your own version of
> TRUE, FALSE and bool is something I see too many
> programs do.  The problem is that there is no
> way to know your definition won't conflict with
> someone else's that is included by some header file,
> perhaps very indirectly.  Even if you successfully
> compile, you can never be certain that some future
> version of a header file you don't even care about
> will redefine it.  I always just use 0 and 1 directly
> in my code and have given up trying to alias these
> to FALSE and TRUE.

I guess this is a case of conflicting requirements.
Defining a type improves code readability which is
a good thing.  But if somebody's include file
breaks the code, that is a bad thing.  Most of my
programming has been in a solo or small team
environment, where I haven't had to worry about
other includes, so the improved readability and
type checking was a definite plus.

Which leads to a question about typechecking.
I was operating on the (wrong?) assumption that
the following would cause at least a warning:

typedef int typeA;
typedef int typeB;

typeA a;
typeB b;

a = b;  /* should cause a warning, different types */

I looked a little deeper, and at least as far as C is
concerned, "a = b;" is legal: a typedef only creates an
alias for the underlying type, not a new type.  Maybe I
had some extra compiler warnings enabled when I used it.
I'm pretty sure lint would give a warning.

I want the warning.  In my mind, the reason for
typedefs is to prevent things like this, or at
least force me to write a = (typeA)b; to indicate
that I know I am mixing types.
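
This isn't part of the proposal, just a sketch of one way to
make gcc itself complain: wrap each type in a one-member
struct, since distinct struct types are never
assignment-compatible:

typedef struct { int v; } typeA;
typedef struct { int v; } typeB;

int main(void)
{
    typeA a;
    typeB b = { 0 };

    /* a = b; */     /* gcc rejects this: incompatible types */
    a.v = b.v;       /* mixing is now explicit, like the cast */
    return 0;
}

It costs some notational clutter, so lint plus a plain typedef
may still be the better trade.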

I agree with you that TRUE and FALSE are at
risk of being re-defined somewhere else.

How about a compromise?  Use 0 and 1 instead of
TRUE and FALSE, and use "typedef char HAL_BIT;"
for variables that represent digital inputs and
outputs.  Keeping the typedef for HAL_BIT lets
us do typechecking (with lint, if not with gcc),
and HAL_BIT is unlikely to be redefined in another
include file.
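
In code, the compromise would look like this (HAL_BIT is just
the name I'm proposing here; the variable and function are made
up for the example):

typedef char HAL_BIT;

void example(void)
{
    HAL_BIT estop_in;       /* a digital input */

    estop_in = 1;           /* plain 0 and 1, no TRUE/FALSE macros */
    if (estop_in)
        estop_in = 0;       /* reads as clearly as with TRUE/FALSE */
}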

>> "bit" values occupy at least one byte - no packing
>> is done at the HAL level.  This prevents problems
>> when two threads try to modify different bits in
>> the same byte.

This is the real reason I mentioned a separate type
for digital I/O bits.  As long as each bit is stored
as a separate entity, it is OK.  Whether we typedef
the entity or not is basically a style issue.
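
Here is an illustration (mine, not HAL code) of the hazard:
setting one bit of a packed byte is a read-modify-write of the
whole byte, so two threads touching *different* bits can still
lose an update if the load/store sequences interleave:

unsigned char packed_bits = 0;

void thread_a(void) { packed_bits |= 0x01; }  /* load, OR, store */
void thread_b(void) { packed_bits |= 0x02; }  /* may store a stale copy */

/* One byte per bit: each write is a plain store that touches
   nothing the other thread cares about. */
unsigned char bit_a = 0, bit_b = 0;

void thread_a2(void) { bit_a = 1; }
void thread_b2(void) { bit_b = 1; }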

>>  bit    Clamp;    /* output:  true means axis should be clamped */
>>
> Does this mean clamp velocity, voltage, or position?  Either
> way, wouldn't we need some max value to clamp it to?

This refers to a physical clamp that mechanically locks the axis.
Not all machines have them, but if they are present, the high-level
code has to unclamp before moving an axis, and might want to clamp
any axes that are not being moved.  The clamp/unclamp decision is
currently made in the bridgeportio task, not in emcmot, but the
I/O bit to activate the clamp should be associated with the axis.
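
Something like this is what I have in mind (the struct name and
the second comment are placeholders; only the Clamp bit itself
comes from the rev 0.01 post):

typedef char HAL_BIT;

typedef struct {
    HAL_BIT Clamp;    /* output: 1 = engage the mechanical clamp */
    /* ... the rest of the per-axis I/O bits would live here */
} hal_axis;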

> The INI file almost definitely has to be read in a
> user space app.  This means either each HAL implementation
> comes with two components, a real time driver and a corresponding
> non-realtime app that reads and parses the INI file and sends
> data appropriate for that HAL down to the realtime app via shared
> memory or a fifo.  Or we always copy the entire INI file down
> and have each HAL parse the INI data now loaded into memory.
> My preference would probably be for the first option, since the
> less code that runs in realtime, where debugging is harder and
> mistakes are more likely to halt your system, the better.

Here is where I get to display my ignorance of the Linux module
architecture...  As I understand it, real-time code is part
of a kernel module.  When that module is insmod'ed, a function
"init_module()" is called inside that module.

Is init_module() running at the user level, and simply making
calls to the realtime kernel to install the real-time code?
Or is init_module itself running as a real-time/kernel function?

If init_module() is a user mode function, then it should handle
the ini file.  If not, then I agree that a user mode helper will
be needed.
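
If a helper does turn out to be needed, I picture it handing
down something like this (the struct and field names are made
up to make the idea concrete; the transport, shared memory or
a fifo, is left abstract):

/* filled in by the user-space helper that parses the INI file */
struct hal_config {
    double max_velocity;
    double max_acceleration;
    int    axis_has_clamp;    /* 1 if the axis has a clamp */
};

/* helper: parse the INI, fill a struct hal_config, send it down */
/* module: receive the struct; no INI parsing in realtime code   */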

I've been thinking about initialization and configuration ever
since I posted my HAL rev 0.01 message.  I've got some ideas that
I want to post for discussion, and a number of questions that may
simply show how little I know.  I don't have time to write it all
up right now, but I'll try to post tonight.

John Kasunich
