Re: FPGA for PCI based servo control board



At 12:18 AM 4/6/03 -0500, John Sheahan wrote:

>On Sat, Apr 05, 2003 at 11:25:40PM -0500, John Kasunich wrote:
> >
> > At 07:27 PM 4/5/03 -0500, John Sheahan wrote:
> >
> > >how come one scheme here always transmits max size data,
> > >and one transmits a delta?  All schemes could run either way.
> >
> > I don't understand what you are saying.  One scheme sends 256
> > bytes.  The other scheme sends only the bytes needed by the
> > specific machine, which depends on the number of axes, etc.
> > For any given machine, the number of bytes in each packet is
> > always the same, and all data is updated every servo cycle.
> > No deltas.  One scheme is simpler to implement but slower,
> > the other is faster but more complicated.  I don't care which
> > one is used, but obviously the software in the PC and the
> > micro or state machine at the other end of the link have to be
> > using the same scheme.
>
>Sorry - I was particularly unclear.
>
>I see (at least) two interfaces and (at least) two ways of
>sending the data.
>Either send all the data, or a delta.
>Either scheme can be used on either interface.
>Doing transfer-time sums for the max data at the slowest
>serial interface rate seemed unreasonable.

I like to look at the worst case situation.  I agree that 100Mb/s
is lots more common on PC NICs today.  But the embedded
ethernet modules are probably cheaper at 10Mb/s.  Especially
if you want to put the MAC in the FPGA - I really don't want
to pay for a 100MHz+ FPGA if a slower one will do the job.
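For what it's worth, the transfer-time sums are quick to run.  A rough sketch (the servo rate, payload sizes, and framing overhead here are my assumed numbers for illustration, not anything we've agreed on):

```python
# Back-of-envelope wire time for one servo update over Ethernet.
# Assumes standard 802.3 framing overhead: 8-byte preamble/SFD,
# 14-byte header, 4-byte FCS, 12-byte inter-frame gap.

def frame_time_us(payload_bytes, bits_per_sec):
    """Wire time in microseconds for one frame carrying payload_bytes."""
    overhead = 8 + 14 + 4 + 12
    total_bits = (payload_bytes + overhead) * 8
    return total_bits / bits_per_sec * 1e6

full_packet = frame_time_us(256, 10e6)   # worst case: 256 bytes every cycle
small_packet = frame_time_us(32, 10e6)   # typical: only what one machine needs

servo_cycle_us = 1000.0                  # assumed 1 kHz servo loop
print(f"256-byte packet: {full_packet:.1f} us of a {servo_cycle_us:.0f} us cycle")
print(f" 32-byte packet: {small_packet:.1f} us")
```

Even the 256-byte worst case comes out around a quarter of an assumed 1 ms servo cycle at 10Mb/s, which is why I don't see the slow link as a showstopper.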

>Sending all data takes longer, but is more robust to packet loss.

Exactly.  IMHO, so much more robust that I wouldn't even consider
sending deltas.  Again, in the worst case, all bits could change
state, and you'd have to send it all anyway.  The only thing I was
looking at is whether to send 24, or 32, or however many bytes
a specific machine needs, or to send 256 bytes on all machines
to support any possible configuration.
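To make the choice concrete, here is a sketch of the fixed-size, no-deltas idea.  The per-axis field names and sizes are placeholders I made up for discussion, not the agreed format:

```python
import struct

# Hypothetical per-axis servo record: 4-byte position command,
# 2-byte velocity, 2-byte flags.  Names and widths are guesses.
AXIS_FMT = "<ihh"
AXIS_SIZE = struct.calcsize(AXIS_FMT)   # 8 bytes per axis

def pack_servo_packet(axes):
    """Pack one servo-cycle update: every field sent every cycle, no deltas."""
    return b"".join(struct.pack(AXIS_FMT, pos, vel, flags)
                    for pos, vel, flags in axes)

# A 4-axis machine needs 32 bytes per cycle...
pkt = pack_servo_packet([(1000, 50, 0)] * 4)
# ...while the "one size fits all" alternative always pads to 256.
padded = pkt.ljust(256, b"\x00")
```

Either way every machine's packet is the same length on every cycle; the only question is whether that length is per-machine or the 256-byte maximum.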

> > The PC to FPGA interface can be ISA or EPP for low cost,
> > PCI for performance, or ethernet to let the FPGA board be
> > closer to the machine.  The user makes the choice depending
> > on what he thinks is most important.
>
>minor quibble. gigabit ethernet is as fast as pci.

Yeah, I should have said ethernet for speed _and_ remote location.
Compared to ISA or EPP, Ethernet will probably also put less
load on the PC's CPU, since accessing a PCI NIC avoids
ISA or EPP wait states.

That said, the unique strong point about the ethernet version is the
ability to remotely mount it.

BTW, I think gigabit ethernet is really unreasonable for an FPGA
MAC on a homebrew board. ;-)

> > The FPGA-to-device interface is common to all four designs, and
> > I/O boards can be used with any FPGA board.  I/O boards are
> > designed to mix and match I/O to keep cost down.  For a one
> > off project, the I/O board could be custom made on perf-board
> > or wire-wrap, using published schematics.  That's much harder
to do with an FPGA, especially if you're gonna connect it to a
> > 33MHz PCI bus!
>
>agreed on PCI.  not a given for a PLCC on eth I think.
>Could even put a fpga on a couple-inch square smd that plugs in
>to the perf board - like many of the current usb-style interfaces.

Building the FPGA board as a daughterboard is definitely cheaper
than using two connectors and a cable.  I was hoping that I/O
board glass could be used with all versions of the FPGA board.
Creative PCB layout might allow the option of installing the cable
connector or a daughterboard connector on the same bare board.
I am definitely in favor of anything that reduces cost.

> > Complexity is in the eye of the beholder.  Yes, the parts count is
> > low.  But that's like saying a 2003 PC motherboard is less complex
> > than a 1985 one because there are fewer parts on it.  In my view,
> > complexity = design time, not parts count.  The FPGA is complex,
> > the I/O interface is simple.  But the interface costs as much or
> > more than the FPGA, because terminal blocks, relays, optocouplers,
> > and such are expensive.
> >
>
>I think this shows our relative experience bases.

Exactly.  There are two kinds of engineers.  One assumes that the
stuff outside his area of expertise is easy.  You know, the hardware
guy who says "so what if the interface to my chip is weird, let the
software guys code around it".  On the other hand are ones like me,
who assume that if I can't do it, it must be really hard and should be
avoided.  My attitude comes from home projects, where if I can't do
it, it really should be avoided.  This project is different: we have
multiple people contributing their talents.

>  I'd see the fpga as
>a couple of days to get the rtl simulating. It would take me longer
>than that to decide on the IO drivers, sort out footprints and draw
>the board.

My experience with FPGAs (long ago) was that defining the
design is the easy part.  Placing and routing are the worst, and then
you still have to lay out the PCB.  Still, I understand what you are
saying, and I've gotta yield to your experience in this area.

> > >Whats the model for the connection between this card and the target?
> > >some kind of wide, high-density connector?
> >
> > Yes.  I was thinking SCSI-3 cables (68 pins), but there are other
> > possibilities.
> >
>I find HD SCSI cables relatively expensive, short, and fragile.
>And the sockets are expensive in small qty.

All true.  With the FPGA board inside the PC, something along those
lines is inevitable, though.  The EPP and ethernet versions could use
a daughterboard.  The more I think about that idea, the more I like it.

> > >The IO card is still required.
> >
> > Yes.  You will always have an I/O card, unless you intend to take
> > the limit switch leads, spindle on/off, coolant pump, and all the
> > other wires directly to the back of the PC slot.
>
>sure. I'd just add the interface electronics at that point.

I think we are in agreement, or very close.

> > >> 4)  Ethernet - big FPGA, containing an ethernet MAC and the
> > >> control hardware.  The packet format is as I described above.
> > >> The PC NIC ID (eth0, etc.) and the FPGA MAC address are
> > >> entered into the EMC ini file.
> > >>
> > >
> > >10 or 20 k gates I'd assume, but not a particularly high pin count.
> > >probably plcc (hence socketable, hence buildable by most)
> > >On the same board as all the IO buffers, physically located close to
> > >the machine.  My picture here is just a cheap longish cable.
> >
> > You could put it all on one board if you want.  But when you lay out
> > that board, you better know what I/O mix you need.  And be prepared
> > to re-do the layout if you change your mix.  I/O and FPGA on two
> > boards lets the I/O change without affecting the FPGA board.  If you
> > have a medium to high volume application with a fixed set of I/O, by
> > all means put everything on one board.
> >
>
>perhaps that daughter board I mentioned would be useful here.

Yes.  The daughterboard approach works for ethernet and parallel port.
It would be really cool if we could find a cable connector and a board
stacking (daughterboard) connector with the same PC footprint.  Then
the same I/O glass could be used to build a cable connected I/O board
for ISA/PCI, and a motherboard for an ethernet or EPP daughterboard.
I'll look into it.
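While I'm at it, the ini file entries for the ethernet version might look something like this.  Section and key names below are my guesses for discussion, not actual EMC syntax, and the MAC address is obviously a placeholder:

```ini
; Hypothetical EMC ini entries for the ethernet FPGA board.
[FPGA_BOARD]
INTERFACE = eth0                ; PC NIC the board is attached to
BOARD_MAC = 00:11:22:33:44:55  ; MAC address of the FPGA's ethernet MAC
```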

> > What I'd really like to do is define two things:
> >
> > 1)  A set of registers by which EMC communicates with the control
> > hardware part of the FPGA.
> >
> > 2)  A set of pins by which the control hardware part of the FPGA
> > communicates with the outside world of encoders, switches, motors, etc.
> >
> > If we had these two things, then several parts of the design could
> > begin to make progress:
> >
>
>I'm happy to design the ethernet-mac to pins of an fpga.
>I've got an uncommitted Xilinx eval board that it should be testable on.

Great!  For you, the registers between the MAC and the control
hardware are internal, but I'd still like to have them defined so
the other designs will remain compatible.
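As a starting point for item 1, here is the kind of register map I have in mind.  All the offsets, names, and access types are placeholders I invented to give us something to argue about, not a settled spec:

```python
# Hypothetical register map between the bus/MAC interface and the
# control hardware inside the FPGA.  Offsets, names, and read/write
# attributes are placeholders for discussion only.
REGISTERS = {
    0x00: ("ENC_COUNT_0", "ro", "axis 0 encoder counter"),
    0x04: ("ENC_COUNT_1", "ro", "axis 1 encoder counter"),
    0x10: ("PWM_DUTY_0",  "rw", "axis 0 PWM duty cycle"),
    0x14: ("PWM_DUTY_1",  "rw", "axis 1 PWM duty cycle"),
    0x20: ("DIG_IN",      "ro", "limit switches, home, e-stop inputs"),
    0x24: ("DIG_OUT",     "rw", "spindle, coolant, misc outputs"),
}

def writable(offset):
    """True if EMC is allowed to write this register."""
    return REGISTERS[offset][1] == "rw"
```

The point is that EMC talks only to this map; whether the other side is ISA, EPP, PCI, or your MAC is invisible to it.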

I've got a few hours today.  I'm gonna start a web page with
design documentation.  Doing this stuff in words only is very
frustrating.

John, are you gonna be at NAMES or EMC Monday?  I'd love
to sit down with you, Craig, and some paper for a couple of
hours.  We could accomplish a lot.

John Kasunich




