Messages from 66425

Article: 66425
Subject: Re: Using 3.3V compliant FPGA for 5V PCI
From: gabor@alacron.com (Gabor Szakacs)
Date: 19 Feb 2004 08:59:02 -0800
rickman <spamgoeshere4@yahoo.com> wrote in message news:<4033DC6C.E9E8F1F0@yahoo.com>...
> Nicky wrote:
> In a 3.3 volt slot the quick switch conducts without affecting the
> signal appreciably.  A 5 volt signal is limited (clipped) by the Vcc to
> the switch.  The switch is not really doing a voltage translation.  It
> just raises its resistance significantly when the signal voltage gets
> near Vcc.  
It's important to note that the Quick switch must run on 5 volts in
either case, not at Vio.  Internally the quick switch FETs need about
a volt and a half above the highest required output drive voltage.

Article: 66426
Subject: Re: Dual-stack (Forth) processors
From: "Martin Schoeberl" <martin.schoeberl@chello.at>
Date: Thu, 19 Feb 2004 17:07:28 GMT
"Sander Vesik" <sander@haldjas.folklore.ee> schrieb im Newsbeitrag
news:1077208019.903795@haldjas.folklore.ee...
> In comp.arch Martin Schoeberl <martin.schoeberl@chello.at> wrote:
> > "Sander Vesik" <sander@haldjas.folklore.ee> schrieb im Newsbeitrag
> > news:1077074083.882919@haldjas.folklore.ee...
> > > In comp.arch Martin Schoeberl <martin.schoeberl@chello.at> wrote:
> > > > > Is there a community that is actively involved in discussing
> > > > > and/or developing FPGA-based Forth chips, or more generally,
> > > > > stack machines?
> > > > >
> > > >
> > > > The Java Virtual Machine is stack based. There are some projects
> > > > to build a 'real' Java machine. You can find more information
> > > > about a solution in an FPGA (with VHDL source) at:
> > > > http://www.jopdesign.com/
> > > >
> > > > It is successfully implemented in Altera ACEX 1K50, Cyclone
> > > > (EP1C6) and Xilinx Spartan2.
> > >
> > > It would be interesting to see results for a version that cached
> > > the top of the stack and used a more realistic memory interface.
> > >
> > Hallo Sander,
> >
> > In this design the stack is cached in a multi-level hierarchy:
> >
> > TOS and TOS-1 are implemented as registers A and B. The next level
> > of the stack is local memory that is connected as follows: data in
> > is connected to A and B, and the output of the memory to the input
> > of register B.
> > Every arithmetic/logical operation is performed with A and B as
> > source and A as destination. All load operations (local variables,
> > internal register, external memory and periphery) result in the
> > value loaded into A. Therefore no write-back pipeline stage is
> > necessary. A is also the source for store operations. Register B
> > is never accessed directly. It is read as an implicit operand or
> > for stack spill on push instructions, and written during stack
> > spill and fill.
> > This configuration allows the following operations in a single
> > pipeline stage:
> >     ALU operation
> >     write back result
> >     fill from or spill to the stack memory
> >
> > The dataflow for an ALU operation is:
> >     A op B => A
> >     stack[sp] => B
> >     sp-1 => sp
> >
> > for a 'load' operation:
> >     data => A
> >     A => B
> >     B => stack[sp+1]
> >     sp+1 => sp
> >
> > An instruction (except the nop type) needs either read or write
> > access to the stack ram. Access to local variables, also residing
> > in the stack, needs simultaneous read and write access. As an
> > example, ld0 loads the memory word pointed to by vp onto TOS:
> >     stack[vp+0] => A
> >     A => B
> >     B => stack[sp+1]
> >     sp+1 => sp
> >
> > This configuration fits perfectly the block rams with one read and
> > one write port that are common in FPGAs. A standard RISC CPU needs
> > three data ports (two read and one write) to implement the register
> > file in a ram, and usually one more pipeline stage for the ALU
> > result to avoid adding the memory access time to the ALU delay
> > time. And for single cycle execution you need a lot of muxes for
> > data forwarding.
>
> yes, the block rams (with registers implemented in those) make FPGAs
> have an interesting tradeoff -
> * you can have a large number of registers with no penalty
> * you are very limited in number of read/write ports, and
>   adding more does not scale *AT ALL*

As a RISC processor needs two read ports and only one write port, it
is easy (with a little waste) to achieve this with the typical FPGA
memories: you double the memory and write to both blocks, then you can
use the two blocks as independent read ports.
But there are still some minor problems: current block rams don't
allow read-during-write or unregistered access. This adds more
pipeline stages (with more data forwarding) to the design. With only
stack spill/fill you can calculate the stack address early in the
pipeline, and then you don't need extra pipeline stages in a stack
based design.
And with a 'large' on-chip stack it can be seen as a very elegant and
simple (no tag rams) data cache.
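The stack dataflow quoted above can be sketched as a small simulator. This is my own toy model in Python for illustration, not the actual JOP VHDL; the class and method names are made up.

```python
import operator

class StackCache:
    """Two-level stack cache: TOS in register A, TOS-1 in register B,
    deeper entries spilled to a small on-chip RAM indexed by sp."""

    def __init__(self, depth=16):
        self.a = 0              # register A: top of stack (TOS)
        self.b = 0              # register B: TOS-1, never addressed directly
        self.mem = [0] * depth  # stack RAM (one read + one write port)
        self.sp = 0             # points at the deepest spilled entry

    def load(self, data):
        # 'load': data => A, A => B, B => stack[sp+1], sp+1 => sp
        self.sp += 1
        self.mem[self.sp] = self.b   # spill old TOS-1 (write port)
        self.b = self.a
        self.a = data

    def alu(self, op):
        # ALU op: A op B => A, stack[sp] => B (fill), sp-1 => sp
        self.a = op(self.a, self.b)
        self.b = self.mem[self.sp]   # fill new TOS-1 (read port)
        self.sp -= 1

# (3 + 4 * 5) evaluated stack-style: push 3, 4, 5; mul; add
s = StackCache()
s.load(3)
s.load(4)
s.load(5)
s.alu(operator.mul)   # 5 * 4 => A
s.alu(operator.add)   # 20 + 3 => A
print(s.a)            # 23
```

Note that each step touches at most one stack-RAM read and one write, which is the property that lets a single one-read/one-write block ram hold the stack.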

>
> >
> > In summary: in my opinion a stack architecture is a perfect choice
> > for the limited hardware resources in an FPGA.
> >
> > About the 'more realistic memory interface':
> >
> > I don't see the problem. The main memory interface is a separate
> > block and currently there are three different implementations for
> > different boards: a low cost version with slow 8 bit ram, a 32 bit
> > interface for fast async. ram, and Ed Anuff added a 16 bit
> > interface for the Xilinx version on a BurchED board. Feel free to
> > implement your interface of choice (SDRAM, ...).
>
> I see - just the benchmark numbers only used the simple 8-bit
> interface.

I really should update this web page (It is soo old....)

>
> >
> > Sorry for the long mail, but I could not resist to 'defend' my
design
> > ;-)
> >
> > Martin
> >
> >
>
> --
> Sander
>
> +++ Out of cheese error +++



Article: 66427
Subject: Re: Microblaze instruction timings
From: Goran Bilski <goran@xilinx.com>
Date: Thu, 19 Feb 2004 18:18:55 +0100
Hi,

Multicycle instructions always take multiple cycles.
This is due to the pipeline of MicroBlaze.
MicroBlaze has only 3 pipeline stages: Instruction Fetch (IF), Operand
Fetch (OF) and Execution Stage (EX).

When a multicycle instruction is executing (is in EX), the next
instruction is in the OF stage.
The pipeline can't move since the EX stage is occupied.
More complex handling of the EX stage, allowing more than one
instruction in it at the same time, may be possible but would increase
the control complexity quite a lot.
All pipeline hazards are resolved in hardware, and an increase in
complexity might result in overall lower performance since the clock
frequency might be lower.

The best way to handle multicycle instructions is to increase the
number of pipeline stages, but that would increase the area.
You always pay for higher performance with more resources.
The current MicroBlaze is a good tradeoff between area and
performance: it is smaller and at the same time faster than any other
soft processor.

The 950 LUT figure includes the basic features, with no caches or
debug logic.
The caches are quite cheap in LUTs, around 50 LUTs for the instruction
cache; the cost is that BRAM is needed to hold the caches.
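The stall behaviour described here can be modelled with a toy cycle counter. The latencies below are my illustrative assumptions, not MicroBlaze datasheet numbers.

```python
def total_cycles(latencies):
    """Cycles to run a list of EX-stage latencies through a 3-stage
    in-order pipeline (IF/OF/EX) with a single EX unit.

    Two cycles fill IF and OF; after that, EX is busy for the full
    latency of each instruction and nothing behind it can advance,
    so a multicycle instruction always costs its full latency."""
    return 2 + sum(latencies)

print(total_cycles([1, 1, 1, 1, 1]))  # 7: five single-cycle instructions
print(total_cycles([1, 1, 2, 1, 1]))  # 8: a 2-cycle shift always adds a
                                      #    cycle, result used or not
```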

Göran
Jon Beniston wrote:

>Hi,
>
>Can anyone (Goran?) fill in some details of the Microblaze's pipeline
>for me? Do multi-cycle instructions always take multiple cycles? For
>example, if a load or shift is followed by an instruction that doesn't
>use the result of the load or shift, will the load or shift still cost
>two cycles? What is the branch penalty?
>
>Also, what does the 950 logic-cell figure include? Does it include the
>caches as well as all of the optional instructions / debug logic?
>
>Cheers,
>JonB
>  
>


Article: 66428
Subject: Re: Simulation MODEL for SRAM
From: Bassman59a@yahoo.com (Andy Peters)
Date: 19 Feb 2004 09:51:10 -0800
ALuPin@web.de (ALuPin) wrote in message news:<b8a9a7b0.0402190121.6e44f4c5@posting.google.com>...
> Dear Sir or Madam,
> 
> I want to design an SRAM controller for the asynchronous SRAM
> IDT71V256SA.
> 
> Can somebody tell me if there is such a VHDL simulation model available?

For some reason, every time I look at IDT's web site, there seems to
be fewer models...

Cypress makes equivalent parts; perhaps they have a model?

-a

Article: 66429
Subject: Virtex-II Speed grade -6 exist?
From: ccon <>
Date: Thu, 19 Feb 2004 09:52:43 -0800
Hi guys, 
I was informed (I don't remember who told me or where it came from) 
that the Virtex-II is also available in speed grade -6. Is that true? 

To my understanding the Virtex-II family has only -4 and -5 speed
grades. Please clarify. 



Article: 66430
Subject: Re: Using 3.3V compliant FPGA for 5V PCI
From: rickman <spamgoeshere4@yahoo.com>
Date: Thu, 19 Feb 2004 13:13:45 -0500
Gabor Szakacs wrote:
> 
> rickman <spamgoeshere4@yahoo.com> wrote in message news:<4033DC6C.E9E8F1F0@yahoo.com>...
> > Nicky wrote:
> > In a 3.3 volt slot the quick switch conducts without affecting the
> > signal appreciably.  A 5 volt signal is limited (clipped) by the Vcc to
> > the switch.  The switch is not really doing a voltage translation.  It
> > just raises its resistance significantly when the signal voltage gets
> > near Vcc.
> It's important to note that the Quick switch must run on 5 volts in
> either case, not at Vio.  Internally the quick switch FETs need about
> a volt and a half above the highest required output drive voltage.

Actually, I think they need to run off a separate I/O voltage.  I
believe there is an app note on one of the manufacturers' web sites
showing that a diode drop is useful to derive this voltage from 5.0
volts.
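A quick back-of-the-envelope check of the scheme (the diode drop and FET threshold below are illustrative assumptions; consult the actual app note for real values):

```python
# Sketch of why a diode-dropped 5 V rail lets a quickswitch pass
# 3.3 V signals: the switch stops conducting as the signal nears
# (gate voltage - FET threshold), so that difference sets the clamp.

V_SUPPLY = 5.0   # quickswitch supply
V_DIODE  = 0.7   # assumed series diode drop
V_FET_TH = 1.0   # assumed pass-FET threshold, roughly

def clamp_voltage(v_supply, v_diode, v_fet_th):
    return v_supply - v_diode - v_fet_th

print(clamp_voltage(V_SUPPLY, V_DIODE, V_FET_TH))  # ~3.3 V
```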

-- 

Rick "rickman" Collins

rick.collins@XYarius.com
Ignore the reply address. To email me use the above address with the XY
removed.

Arius - A Signal Processing Solutions Company
Specializing in DSP and FPGA design      URL http://www.arius.com
4 King Ave                               301-682-7772 Voice
Frederick, MD 21701-3110                 301-682-7666 FAX

Article: 66431
Subject: Re: Can FPGA bootstrap itself?
From: "Martin Euredjian" <0_0_0_0_@pacbell.net>
Date: Thu, 19 Feb 2004 18:23:45 GMT
Marius Vollmer wrote:

> Imagine you want to have an FPGA board that has a USB port and no
> other connection (i.e., no other way to upload a bitstream).  Can that
> FPGA bootstrap itself over the USB port?
>
> There would be a 'boot' bitstream in some flash on the board and the
> FPGA would be configured initially with that bitstream.  The function
> of that bitstream would be to make the FPGA listen on the USB port for
> another bitstream that is then used to configure the FPGA for its real
> function.

I don't think you need the boot bitstream at all.  Check out the 245BM chip
from FTDI www.ftdichip.com .  You'd use a few pins to drive the FPGA in
serial configuration mode.  You would have complete control.

Now, if you also want to use USB for non-programming related I/O you'll have
to do a bit more work to deal with the control pins.  This is where
inserting a low cost 8 bit micro in the path might help.  These days you can
do that for just a couple of dollars.
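A rough sketch of what driving slave-serial configuration from the host side looks like. `set_pins` here is a hypothetical stand-in for whatever byte-wide write the USB bridge exposes; it is not an FTDI API, and a real design would also monitor INIT_B/DONE.

```python
# Assumed bit positions on the bridge's output port (hypothetical).
PROG_B, CCLK, DIN = 0x01, 0x02, 0x04

trace = []                    # stand-in for the hardware: record writes

def set_pins(value):
    trace.append(value)       # real code: write one byte to the bridge

def configure(bitstream):
    set_pins(0)               # pulse PROG_B low to clear the FPGA
    set_pins(PROG_B)          # release; FPGA now expects serial data
    for byte in bitstream:
        for i in range(7, -1, -1):       # MSB first (assumed ordering)
            d = DIN if (byte >> i) & 1 else 0
            set_pins(PROG_B | d)         # set up DIN
            set_pins(PROG_B | d | CCLK)  # rising edge clocks the bit in
            set_pins(PROG_B | d)         # return CCLK low

configure(b"\xAA")            # 2 setup writes + 3 writes per bit
```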


-- 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Martin Euredjian

To send private email:
0_0_0_0_@pacbell.net
where
"0_0_0_0_"  =  "martineu"



Article: 66432
Subject: Re: Virtex-II Speed grade -6 exist?
From: Austin Lesea <austin@xilinx.com>
Date: Thu, 19 Feb 2004 11:15:47 -0800
Yes,

-6 is available for all parts excepting the 2V8000.

Austin



ccon wrote:
> Hi guys,
> 
> I was informed (I don't remember who told me or where it came from) that 
> the Virtex-II is also available in speed grade -6. Is that true?
> 
> To my understanding the Virtex-II family has only -4 and -5 speed
> grades. Please clarify. 
> 

Article: 66433
Subject: Re: Dual-stack (Forth) processors
From: jzakiya@mail.com (Jabari Zakiya)
Date: 19 Feb 2004 11:26:22 -0800
fox@ultratechnology.com (Jeff Fox) wrote in message news:<4fbeeb5a.0402182345.3a2f3fa0@posting.google.com>...
> jzakiya@mail.com (Jabari Zakiya) wrote in message news:<a6fa4973.0402181055.27856c3@posting.google.com>...
> > Corrections:
> > 
> > The RTX 2000 had two 16-bit 256 element deep stacks (Return & Data), 
> > a 2-4 cycle interrupt response time, and a bit-mutiply instruction which
> > could perform a complete general purpose multiply in 16-cycles. It was
> > rated a 8 MHz (but they could easily run at 10 MHz [which meant it took
> > a 20 MHz clock] at least at room temperatures).
> > 
> > The RTX 2010 had all of the above, plus a one-cycle hardware 16-bit
> > multiply, a one-cycle 16-bit multiply/accumulate, and a one-cycle
> > 32-bit barrel shift. This was the version that Harris/Intersil based
> > the radhard version upon, which NASA and APL (Applied Physics Lab in
> > Columbia, MD) used for its space missions. They both still have a stash
> > left, the last that I heard.
> > 
> > The RTX 2001 was a watered down version which was basically the 2000,
> > but with only 64 element deep stacks. It was intended (according to
> > Harris) to be a cheaper/faster alternative to the 2000, but like the
> > Celeron vs the Pentium, if you can get the real thing at basically the
> > same price, why use the neutered version? Plus, the reduction of stacks
> > from 256 elements to 64 element greatly reduced the ability to do
> > multi-tasking and stack switching.
> > 
> > I used the RTX 2000/2010 extensively when I worked at NASA GSFC
> > Goddard Space Flight Center in Greenbelt, MD) from 1979-1994.
> 
> I had two RTX boards.  One was a rather expensive board six layer
> board with a Meg of SRAM and a shared memory interface to a PC ISA
> bus.  It was from Silicon Composers. The other was one of the cheap
> European Indelko Forthkits, with RTX-cmForth, that I got from Dr.
> Ting.  I had no experience with the 2010.   I didn't remember that
> the 2001 had smaller stacks than the 2000 but I seemed to recall that
> the 2000 had a single cycle multiply and the 2001 had only the
> multiply step instruction.  I no longer have the boards or the
> manuals and I don't think that Dr. Koopman's book goes into the
> details of what made the various models of RTX-20xx different.
> 
> It was a long time ago, so I might have been confused about bit
> level details after all of these years.  I spent a lot more years
> working with P21, I21 and F21 and have a much better memory of
> the bit level details there, it was also more recent.
>   
> > I hope this helps set the history straight with regards to the differences
> > between the RTX versions. Too bad Harris didn't know how to market them.
> > 
> > Jabari Zakiya
> 
> Harris seemed to try first marketing it as Forth chip, then failing
> at that as a good realtime computer for use with C.  I have often 
> heard that it was too bad that they didn't know how to market it
> properly.  Still I don't know if anyone really knows what they
> should-could-would have done to market it more successfully.  They
> simply decided that they could easily market 80C286 that they
> could make on the same fab line.  It also helps date those chips,
> Novix vs 8088, 8086 and RTX vs 80286.  The realtime response,
> fast interrupt handling (relatively) and deterministic timing
> were where they won most easily, but they weren't 'backward
> compatible' with PC software like the Intel compatible chips
> so they were swimming upstream in their marketing efforts.
> 
> Best Wishes

I know it was some time ago now, but only the RTX 2010 had the
hardware one-cycle multiply/accumulate, etc. instructions. The RTX
2000/1 had the one-bit 16-cycle multiply/divide instructions.

Someone recently posted (Jan or Feb 04) the URLs to get the PDFs of
the RTX 2000 manual and the RTX 2010 Intersil user's guide. I have
both of them. If you can't find them on c.l.f. let me know and I'll
email them to you.

I also used the Silicon Composers boards, with the RTX 2000s. I used
them primarily to test/compile code for our embedded systems, and to
play around with for other projects. A good thing about the RTX
2000/10 was that they were pin-for-pin compatible, and I could hand
code the extra 2010 instructions with the SC development software.
Piece of cake.

The 2000/10 were way ahead of their time (as was the original Novix).
You could also do 32-to-16 bit square roots, one-cycle streaming
memory instructions, partition memory into USER segments which could
be accessed faster than regular ram space, stack partitioning, and 3
counter/timers; there was fast external interrupt response, and the
clock could be completely turned off, and back on again without losing
instructions, to run on as little power as you could get away with.

If only somebody like Motorola had taken hold of these chips. With
their fabrication capabilities, and CPU marketing savvy, they could
have really done something with these designs. And think about just a
500 MHz RTX-type chip now. Even then, the RTX performed at multiple
MIPS times its operating frequency (because of instruction
compression - multiple Forth words being performed by one RTX
instruction).

But of course, I'm just engaging in ex post facto wishful thinking.

Yours

Jabari
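For readers who have never seen a "multiply step" instruction: the one-bit multiply mentioned above repeats a conditional-add-and-shift step once per bit, so a 16x16 multiply takes 16 cycles. A hedged sketch of the technique (this models the idea, not the RTX's actual register assignments):

```python
def multiply_step(acc, multiplicand, lo):
    """One bit of a shift-add multiply on the (acc:lo) register pair."""
    if lo & 1:                          # test LSB of the multiplier
        acc += multiplicand             # conditional add into high half
    lo = ((acc & 1) << 15) | (lo >> 1)  # shift the 32-bit pair right
    acc >>= 1
    return acc, lo

def multiply16(a, b):
    acc, lo = 0, b
    for _ in range(16):                 # 16 steps = 16 cycles
        acc, lo = multiply_step(acc, a, lo)
    return (acc << 16) | lo             # full 32-bit product

print(multiply16(1234, 5678))           # 7006652
```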

Article: 66434
Subject: Is there an easy way to get a list of unused pin in ML300?
From: QiDaNei <qidanei__2@hotmail.com>
Date: Thu, 19 Feb 2004 12:31:30 -0700
Hi,
   I am using a Xilinx V2P7 chip on an ML300 board. For my design to
work I need to use some spare pins which are not connected to on-board
devices. As the chip has so many pins, I wonder if there is an easy
way to find out which pins are spare. Does any implementation tool
give me such info?

Thanks.


Article: 66435
Subject: Re: Xilinx ISE 4.2 Unisim Block RAM bug?
From: Paulo Dutra <Paulo.Dutra@xilinx.com>
Date: Thu, 19 Feb 2004 11:38:41 -0800
http://support.xilinx.com/techdocs/1655.htm


Kevin Brace wrote:

>Hello,
>
>I developed my own synchronous FIFO buffer using Virtex Block RAM.
>However, when I try to simulate it on ModelSim XE 5.5e and 5.7c with ISE
>WebPACK 4.2's Unisim library, I get the following error messages.
>
># ** Error:
>../../../xilinx_webpack/verilog/src/unisims/RAMB4_S16_S16.v(453):
>$recovery( posedge CLKA:690 ns, posedge 
>CLKB &&& clkb_enable:690 ns, 100 ps );
>#    Time: 690 ns  Iteration: 2  Instance:
>/FIFO_Testbench_Top/FIFO_Inst/BRAM_31_16
># ** Error:
>../../../xilinx_webpack/verilog/src/unisims/RAMB4_S16_S16.v(453):
>$recovery( posedge CLKA:690 ns, posedge 
>CLKB &&& clkb_enable:690 ns, 100 ps );
>#    Time: 690 ns  Iteration: 2  Instance:
>/FIFO_Testbench_Top/FIFO_Inst/BRAM_15_0
># ** Error:
>../../../xilinx_webpack/verilog/src/unisims/RAMB4_S16_S16.v(453):
>$recovery( posedge CLKA:720 ns, posedge 
>CLKB &&& clkb_enable:720 ns, 100 ps );
>#    Time: 720 ns  Iteration: 2  Instance:
>/FIFO_Testbench_Top/FIFO_Inst/BRAM_31_16
># ** Error:
>../../../xilinx_webpack/verilog/src/unisims/RAMB4_S16_S16.v(453):
>$recovery( posedge CLKA:720 ns, posedge 
>CLKB &&& clkb_enable:720 ns, 100 ps );
>#    Time: 720 ns  Iteration: 2  Instance:
>/FIFO_Testbench_Top/FIFO_Inst/BRAM_15_0
># ** Error:
>../../../xilinx_webpack/verilog/src/unisims/RAMB4_S16_S16.v(453):
>$recovery( posedge CLKA:750 ns, posedge 
>CLKB &&& clkb_enable:750 ns, 100 ps );
>#    Time: 750 ns  Iteration: 2  Instance:
>/FIFO_Testbench_Top/FIFO_Inst/BRAM_31_16
># ** Error:
>../../../xilinx_webpack/verilog/src/unisims/RAMB4_S16_S16.v(453):
>$recovery( posedge CLKA:750 ns, posedge 
>CLKB &&& clkb_enable:750 ns, 100 ps );
>#    Time: 750 ns  Iteration: 2  Instance:
>/FIFO_Testbench_Top/FIFO_Inst/BRAM_15_0
>                                               .
>                                               .
>                                               .
>                                               .
>                                               .
>
>
>        The above error messages were displayed when the FIFO contains
>one entry, and the user logic tried to do simultaneous read/write of the
>FIFO.
>But if the FIFO contains two or more entries, simultaneous read/write of
>the FIFO doesn't display the above error message, and the FIFO functions
>correctly.
>When the FIFO's RAM (Virtex Block RAM in dual-port mode.) is replaced
>with Verilog's generic RAM, the FIFO functions correctly, so at this
>point I suspect that something is wrong with Unisim's Virtex Block RAM.
>        Next thing I tried was to simulate synchronous version of a FIFO
>buffer (fifoctlr_cc.v) in Xilinx Application Note 175 with the same
>testbench code I used for testing my own FIFO.
>Interestingly, I got error messages very similar to what I got with my
>own FIFO.
>
># ** Error:
>../../../xilinx_webpack/verilog/src/unisims/RAMB4_S8_S8.v(374):
>$recovery( posedge CLKB:690100 ps, 
>posedge CLKA &&& clka_enable:690100 ps, 100 ps );
>#    Time: 660100 ps  Iteration: 1  Instance:
>/fifoctlr_cc_Testbench_Top/fifoctlr_cc_Inst/bram1
># ** Error:
>../../../xilinx_webpack/verilog/src/unisims/RAMB4_S8_S8.v(374):
>$recovery( posedge CLKB:720100 ps, 
>posedge CLKA &&& clka_enable:720100 ps, 100 ps );
>#    Time: 720100 ps  Iteration: 1  Instance:
>/fifoctlr_cc_Testbench_Top/fifoctlr_cc_Inst/bram1
># ** Error:
>../../../xilinx_webpack/verilog/src/unisims/RAMB4_S8_S8.v(374):
>$recovery( posedge CLKB:750100 ps, 
>posedge CLKA &&& clka_enable:750100 ps, 100 ps );
>#    Time: 750100 ps  Iteration: 1  Instance:
>/fifoctlr_cc_Testbench_Top/fifoctlr_cc_Inst/bram1
>                                               .
>                                               .
>                                               .
>                                               .
>                                               .
>
>
>        The application note doesn't say that fifoctlr_cc.v cannot
>handle simultaneous read/write, so I am not sure why these error 
>messages get displayed.
>Eventually I ran out of ideas, so I decided to try ISE 5.1's Unisim
>library, and somehow this time both FIFOs functioned correctly.
>Am I doing something I am not supposed to do with Virtex's Block RAM, or
>did Unisim's Virtex Block RAM contain a bug until the release of ISE
>5.1 (Virtex was released in 1998)?
>
>
>
>Kevin Brace (If someone wants to respond to what I wrote, I prefer if
>you will do so within the newsgroup.)
>  
>

Article: 66436
Subject: Re: Dual-stack (Forth) processors
From: "Martin Euredjian" <0_0_0_0_@pacbell.net>
Date: Thu, 19 Feb 2004 19:42:47 GMT
Jabari,

What's annoying about bottom posting?

Bottom-posting-gone-wild with a full thread quoted in the top of the
message.  Like yours.

At least with top posting I can very quickly click through a bunch of
messages and read the new post.  With bottom-posting-gone-wild you have to
scroll to the bottom of every darn message to find the new text.  Not only a
waste of bandwidth but annoying as hell.  See below for an example.

With top posting, if you are reading through a thread, you can quickly
navigate up/down the thread and read it without further calisthenics.

If you want to bottom post (which is appropriate when you need paragraphs to
be in context, or to address specific statements) at least take the time to
clip and snip the relevant parts of the message you are replying to.


-- 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Martin Euredjian

To send private email:
0_0_0_0_@pacbell.net
where
"0_0_0_0_"  =  "martineu"




"Jabari Zakiya" <jzakiya@mail.com> wrote in message
news:a6fa4973.0402191126.5e7326ba@posting.google.com...
> fox@ultratechnology.com (Jeff Fox) wrote in message
news:<4fbeeb5a.0402182345.3a2f3fa0@posting.google.com>...
> > jzakiya@mail.com (Jabari Zakiya) wrote in message
news:<a6fa4973.0402181055.27856c3@posting.google.com>...
> > > Corrections:
> > >
> > > The RTX 2000 had two 16-bit 256 element deep stacks (Return & Data),
> > > a 2-4 cycle interrupt response time, and a bit-mutiply instruction
which
> > > could perform a complete general purpose multiply in 16-cycles. It was
> > > rated a 8 MHz (but they could easily run at 10 MHz [which meant it
took
> > > a 20 MHz clock] at least at room temperatures).
> > >
> > > The RTX 2010 had all of the above, plus a one-cycle hardware 16-bit
> > > multiply, a one-cycle 16-bit multiply/accumulate, and a one-cycle
> > > 32-bit barrel shift. This was the version that Harris/Intersil based
> > > the radhard version upon, which NASA and APL (Applied Physics Lab in
> > > Columbia, MD) used for its space missions. They both still have a
stash
> > > left, the last that I heard.
> > >
> > > The RTX 2001 was a watered down version which was basically the 2000,
> > > but with only 64 element deep stacks. It was intended (according to
> > > Harris) to be a cheaper/faster alternative to the 2000, but like the
> > > Celeron vs the Pentium, if you can get the real thing at basically the
> > > same price, why use the neutered version? Plus, the reduction of
stacks
> > > from 256 elements to 64 element greatly reduced the ability to do
> > > multi-tasking and stack switching.
> > >
> > > I used the RTX 2000/2010 extensively when I worked at NASA GSFC
> > > Goddard Space Flight Center in Greenbelt, MD) from 1979-1994.
> >
> > I had two RTX boards.  One was a rather expensive board six layer
> > board with a Meg of SRAM and a shared memory interface to a PC ISA
> > bus.  It was from Silicon Composers. The other was one of the cheap
> > European Indelko Forthkits, with RTX-cmForth, that I got from Dr.
> > Ting.  I had no experience with the 2010.   I didn't remember that
> > the 2001 had smaller stacks than the 2000 but I seemed to recall that
> > the 2000 had a single cycle multiply and the 2001 had only the
> > multiply step instruction.  I no longer have the boards or the
> > manuals and I don't think that Dr. Koopman's book goes into the
> > details of what made the various models of RTX-20xx different.
> >
> > It was a long time ago, so I might have been confused about bit
> > level details after all of these years.  I spent a lot more years
> > working with P21, I21 and F21 and have a much better memory of
> > the bit level details there, it was also more recent.
> >
> > > I hope this helps set the history straight with regards to the
differences
> > > between the RTX versions. Too bad Harris didn't know how to market
them.
> > >
> > > Jabari Zakiya
> >
> > Harris seemed to try first marketing it as Forth chip, then failing
> > at that as a good realtime computer for use with C.  I have often
> > heard that it was too bad that they didn't know how to market it
> > properly.  Still I don't know if anyone really knows what they
> > should-could-would have done to market it more successfully.  They
> > simply decided that they could easily market 80C286 that they
> > could make on the same fab line.  It also helps date those chips,
> > Novix vs 8088, 8086 and RTX vs 80286.  The realtime response,
> > fast interrupt handling (relatively) and deterministic timing
> > were where they won most easily, but they weren't 'backward
> > compatible' with PC software like the Intel compatible chips
> > so they were swimming upstream in their marketing efforts.
> >
> > Best Wishes
>
> I know it was some time ago now, but only the RTX 2010 had the
> hardware
> one-cylce mutiply/accumulate, etc instructions. The RTX 2000/1 had the
> one-bit 16-cycle multiply/divide instructions.
>
> Someone recently posted (Jan or Feb 04) the urls to get the pdfs of
> the
> RTX 2000 manual and RTX 2010 Intersil users guide. I have both of
> them.
> If you can't find them on c.l.f. let me know and I'll email them to
> you.
>
> I also used the Silicon Composer boards, with the RTX 2000s. I used
> them primarily to test/compile code for our embedded systems, and to
> play around with for other projects. A good thing about the RTX
> 2000/10
> were they were pin-for-pin compatible, and I could hand code the extra
> 2010 instructions with the SC development software. Piece of cake.
>
> The 2000/10 were way ahead of their times (as was the original Novix).
> Not only could you also do 32-to-16 bit squareroots, one-cycle
> streaming
> memory instructions, partition memory into USER segments which could
> be accessed faster than regular ram space, stack partitioning, 3
> counter
> timers, fast external interrupt response, the clock could be
> completely
> turned off, and back on again without losing instructions, to run on
> as
> littel power as you could get away with.
>
> If only somebody like Motorola had taken hold of these chips. With
> their
> fabrication capabilities, and cpu marketing savy, they couldl have
> really
> done something with these desigsn. And think about just a 500 MHz RTX
> type
> chip know. Even then, the RTX performed at mulitple MIPs times its
> operating
> frequency (because of instructions compression - multiple Forth words
> being performed by one RTX instruction)
>
> But of course, I'm just engaging in ex post factor wishful thinking.
>
> Yours
>
> Jabari


Imagine if you had to scroll down to here on every new message!!!



Article: 66437
Subject: Re: Dual-stack (Forth) processors
From: Jerry Avins <jya@ieee.org>
Date: Thu, 19 Feb 2004 14:59:26 -0500
Martin Euredjian wrote:

> Jabari,
> 
> What's annoying about bottom posting?
> 
> Bottom-posting-gone-wild with a full thread quoted in the top of the
> message.  Like yours.
> 
> At least with top posting I can very quickly click through a bunch of
> messages and read the new post.  With bottom-posting-gone-wild you have to
> scroll to the bottom of every darn message to find the new text.  Not only a
> waste of bandwidth but annoying as hell.  See below for an example.
> 
> With top posting, if you are reading through a thread, you can quickly
> navigate up/down the thread and read it without further calisthenics.
> 
> If you want to bottom post (which is appropriate when you need paragraphs to
> be in context, or to address specific statements) at least take the time to
> clip and snip the relevant parts of the message you are replying to.

When I reply to a top-posted message with a sig -- like yours -- 
everything below the sig is gone, as here. The _)(*&^%$#@!! news program 
snips it all silently.

Jerry
-- 
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯


Article: 66438
Subject: Re: Dual-stack (Forth) processors
From: Jerry Avins <jya@ieee.org>
Date: Thu, 19 Feb 2004 15:04:59 -0500
Links: << >>  << T >>  << A >>
Martin Euredjian wrote:

> Jabari,
> 
> What's annoying about bottom posting?
> 
> Bottom-posting-gone-wild with a full thread quoted in the top of the
> message.  Like yours.
> 
> At least with top posting I can very quickly click through a bunch of
> messages and read the new post.  With bottom-posting-gone-wild you have to
> scroll to the bottom of every darn message to find the new text.  Not only a
> waste of bandwidth but annoying as hell.  See below for an example.
> 
> With top posting, if you are reading through a thread, you can quickly
> navigate up/down the thread and read it without further calisthenics.
> 
> If you want to bottom post (which is appropriate when you need paragraphs to
> be in context, or to address specific statements) at least take the time to
> clip and snip the relevant parts of the message you are replying to.
> 
> 

Of course, I can restore what the )(*&^%$#@!! browser snipped by
copying and pasting, but I lose a level of quote indentation. Here's a
hint to ease your pain: <ctrl>+<end> brings you right to the end in many
news readers.

Jerry
-- 
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯

-- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Martin Euredjian To send 
private email: 0_0_0_0_@pacbell.net where "0_0_0_0_" = "martineu" 
"Jabari Zakiya" <jzakiya@mail.com> wrote in message 
news:a6fa4973.0402191126.5e7326ba@posting.google.com...

 >> fox@ultratechnology.com (Jeff Fox) wrote in message

news:<4fbeeb5a.0402182345.3a2f3fa0@posting.google.com>...

 >>> > jzakiya@mail.com (Jabari Zakiya) wrote in message

news:<a6fa4973.0402181055.27856c3@posting.google.com>...

 >>>> > > Corrections:
 >>>> > >
 >>>> > > The RTX 2000 had two 16-bit 256 element deep stacks (Return & 
Data),
 >>>> > > a 2-4 cycle interrupt response time, and a bit-mutiply instruction

which

 >>>> > > could perform a complete general purpose multiply in 
16-cycles. It was
 >>>> > > rated a 8 MHz (but they could easily run at 10 MHz [which meant it

took

 >>>> > > a 20 MHz clock] at least at room temperatures).
 >>>> > >
 >>>> > > The RTX 2010 had all of the above, plus a one-cycle hardware 
16-bit
 >>>> > > multiply, a one-cycle 16-bit multiply/accumulate, and a one-cycle
 >>>> > > 32-bit barrel shift. This was the version that Harris/Intersil 
based
 >>>> > > the radhard version upon, which NASA and APL (Applied Physics 
Lab in
 >>>> > > Columbia, MD) used for its space missions. They both still have a

stash

 >>>> > > left, the last that I heard.
 >>>> > >
 >>>> > > The RTX 2001 was a watered down version which was basically 
the 2000,
 >>>> > > but with only 64 element deep stacks. It was intended 
(according to
 >>>> > > Harris) to be a cheaper/faster alternative to the 2000, but 
like the
 >>>> > > Celeron vs the Pentium, if you can get the real thing at 
basically the
 >>>> > > same price, why use the neutered version? Plus, the reduction of

stacks

 >>>> > > from 256 elements to 64 element greatly reduced the ability to do
 >>>> > > multi-tasking and stack switching.
 >>>> > >
 >>>> > > I used the RTX 2000/2010 extensively when I worked at NASA GSFC
 >>>> > > Goddard Space Flight Center in Greenbelt, MD) from 1979-1994.
 >>
 >>> >
 >>> > I had two RTX boards.  One was a rather expensive board six layer
 >>> > board with a Meg of SRAM and a shared memory interface to a PC ISA
 >>> > bus.  It was from Silicon Composers. The other was one of the cheap
 >>> > European Indelko Forthkits, with RTX-cmForth, that I got from Dr.
 >>> > Ting.  I had no experience with the 2010.   I didn't remember that
 >>> > the 2001 had smaller stacks than the 2000 but I seemed to recall that
 >>> > the 2000 had a single cycle multiply and the 2001 had only the
 >>> > multiply step instruction.  I no longer have the boards or the
 >>> > manuals and I don't think that Dr. Koopman's book goes into the
 >>> > details of what made the various models of RTX-20xx different.
 >>> >
 >>> > It was a long time ago, so I might have been confused about bit
 >>> > level details after all of these years.  I spent a lot more years
 >>> > working with P21, I21 and F21 and have a much better memory of
 >>> > the bit level details there, it was also more recent.
 >>> >
 >>
 >>>> > > I hope this helps set the history straight with regards to the

differences

 >>>> > > between the RTX versions. Too bad Harris didn't know how to market

them.

 >>>> > >
 >>>> > > Jabari Zakiya
 >>
 >>> >
 >>> > Harris seemed to try first marketing it as Forth chip, then failing
 >>> > at that as a good realtime computer for use with C.  I have often
 >>> > heard that it was too bad that they didn't know how to market it
 >>> > properly.  Still I don't know if anyone really knows what they
 >>> > should-could-would have done to market it more successfully.  They
 >>> > simply decided that they could easily market 80C286 that they
 >>> > could make on the same fab line.  It also helps date those chips,
 >>> > Novix vs 8088, 8086 and RTX vs 80286.  The realtime response,
 >>> > fast interrupt handling (relatively) and deterministic timing
 >>> > were where they won most easily, but they weren't 'backward
 >>> > compatible' with PC software like the Intel compatible chips
 >>> > so they were swimming upstream in their marketing efforts.
 >>> >
 >>> > Best Wishes
 >
 >>
 >> I know it was some time ago now, but only the RTX 2010 had the
 >> hardware
 >> one-cylce mutiply/accumulate, etc instructions. The RTX 2000/1 had the
 >> one-bit 16-cycle multiply/divide instructions.
 >>
 >> Someone recently posted (Jan or Feb 04) the urls to get the pdfs of
 >> the
 >> RTX 2000 manual and RTX 2010 Intersil users guide. I have both of
 >> them.
 >> If you can't find them on c.l.f. let me know and I'll email them to
 >> you.
 >>
 >> I also used the Silicon Composer boards, with the RTX 2000s. I used
 >> them primarily to test/compile code for our embedded systems, and to
 >> play around with for other projects. A good thing about the RTX
 >> 2000/10
 >> were they were pin-for-pin compatible, and I could hand code the extra
 >> 2010 instructions with the SC development software. Piece of cake.
 >>
 >> The 2000/10 were way ahead of their times (as was the original Novix).
 >> Not only could you also do 32-to-16 bit squareroots, one-cycle
 >> streaming
 >> memory instructions, partition memory into USER segments which could
 >> be accessed faster than regular ram space, stack partitioning, 3
 >> counter
 >> timers, fast external interrupt response, the clock could be
 >> completely
 >> turned off, and back on again without losing instructions, to run on
 >> as
 >> littel power as you could get away with.
 >>
 >> If only somebody like Motorola had taken hold of these chips. With
 >> their
 >> fabrication capabilities, and cpu marketing savy, they couldl have
 >> really
 >> done something with these desigsn. And think about just a 500 MHz RTX
 >> type
 >> chip know. Even then, the RTX performed at mulitple MIPs times its
 >> operating
 >> frequency (because of instructions compression - multiple Forth words
 >> being performed by one RTX instruction)
 >>
 >> But of course, I'm just engaging in ex post factor wishful thinking.
 >>
 >> Yours
 >>
 >> Jabari



Imagine if you had to scroll down to here on every new message!!!
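The one-bit multiply instruction discussed in the quoted thread (a full
16x16 multiply in 16 cycles on the RTX 2000/2001) amounts to one
conditional shift-add per cycle. A purely illustrative software sketch —
the function name and the Python rendering are mine, not RTX microcode:

```python
def rtx_style_multiply(a: int, b: int, width: int = 16) -> int:
    """Model of a one-bit multiply-step loop: `width` steps, one
    multiplier bit examined per step, with a conditional add of the
    shifted multiplicand.  Illustrative only -- not RTX microcode."""
    acc = 0
    for step in range(width):          # 16 steps ~ the 16-cycle multiply
        if (b >> step) & 1:            # test one multiplier bit
            acc += a << step           # conditional add of shifted multiplicand
    return acc & ((1 << (2 * width)) - 1)   # 16x16 -> 32-bit product
```

For 16-bit operands this matches a plain product; the RTX 2010 replaced
the stepped loop with its single-cycle hardware multiply.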


Article: 66439
Subject: Re: Can FPGA bootstrap itself?
From: Uwe Bonnes <bon@elektron.ikp.physik.tu-darmstadt.de>
Date: Thu, 19 Feb 2004 20:19:39 +0000 (UTC)
Links: << >>  << T >>  << A >>
Martin Euredjian <0_0_0_0_@pacbell.net> wrote:
: Marius Vollmer wrote:

: > Imagine you want to have an FPGA board that has a USB port and no
: > other connection (i.e., no other way to upload a bitstream).  Can that
: > FPGA bootstrap itself over the USB port?
: >
: > There would be a 'boot' bitstream in some flash on the board and the
: > FPGA would be configured initially with that bitstream.  The function
: > of that bitstream would be to make the FPGA listen on the USB port for
: > another bitstream that is then used to configure the FPGA for its real
: > function.

: I don't think you need the boot bitstream at all.  Check out the 245BM chip
: from FTDI www.ftdichip.com .  You'd use a few pins to drive the FPGA in
: serial configuration mode.  You would have complete control.

: Now, if you also want to use USB for non-programming related I/O you'll have
: to do a bit more work to deal with the control pins.  This is where
: inserting a low cost 8 bit micro in the path might help.  These days you can
: do that for just a couple of dollars.

The usrp project (www.comsec.com) has code to do so for the Cypress FX2 and
a Cyclone.

Bye
-- 
Uwe Bonnes                bon@elektron.ikp.physik.tu-darmstadt.de

Institut fuer Kernphysik  Schlossgartenstrasse 9  64289 Darmstadt
--------- Tel. 06151 162516 -------- Fax. 06151 164321 ----------
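The scheme Martin describes — a few pins of a byte-wide USB FIFO driving
the FPGA in slave-serial configuration mode — boils down to serialising
the bitstream, one bit per rising CCLK edge. A sketch of that framing
step; the pin positions and MSB-first shift order here are assumptions
for illustration, not FT245BM or Xilinx specifics to rely on:

```python
def frame_slave_serial(bitstream: bytes, din_bit: int = 0, cclk_bit: int = 1) -> bytes:
    """Expand configuration bytes into bit-bang output bytes: each
    bitstream bit becomes two port writes -- DIN valid with CCLK low,
    then the same DIN with CCLK high -- so the FPGA samples DIN on the
    rising edge.  Pin positions are example assignments."""
    out = bytearray()
    for byte in bitstream:
        for i in range(7, -1, -1):             # assumed MSB-first shifting
            d = (byte >> i) & 1
            low = d << din_bit                 # CCLK low, data set up
            out.append(low)
            out.append(low | (1 << cclk_bit))  # rising edge clocks the bit in
    return bytes(out)
```

A real loader would also drive PROG_B, wait for INIT, and check DONE,
per the device's configuration timing.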

Article: 66440
Subject: Re: Can FPGA bootstrap itself?
From: Jim Granville <no.spam@designtools.co.nz>
Date: Fri, 20 Feb 2004 09:32:03 +1300
Links: << >>  << T >>  << A >>


Martin Euredjian wrote:

> Marius Vollmer wrote:
> 
> 
>>Imagine you want to have an FPGA board that has a USB port and no
>>other connection (i.e., no other way to upload a bitstream).  Can that
>>FPGA bootstrap itself over the USB port?
>>
>>There would be a 'boot' bitstream in some flash on the board and the
>>FPGA would be configured initially with that bitstream.  The function
>>of that bitstream would be to make the FPGA listen on the USB port for
>>another bitstream that is then used to configure the FPGA for its real
>>function.
> 
> 
> I don't think you need the boot bitstream at all.  Check out the 245BM chip
> from FTDI www.ftdichip.com .  You'd use a few pins to drive the FPGA in
> serial configuration mode.  You would have complete control.
> 
> Now, if you also want to use USB for non-programming related I/O you'll have
> to do a bit more work to deal with the control pins.  This is where
> inserting a low cost 8 bit micro in the path might help.  These days you can
> do that for just a couple of dollars.


  The OP did not say if USB boot was ALWAYS the path, or if the FPGA 
needed to come-up with no USB - either as default, or as last-update.

  For those apps that need a uC+USB, Cygnal have an MLP32 package with
16K Flash: it can also compress the bitstream into cheaper loader
memory, as well as handle wdog, monitor, and other system power-save
tasks.


http://www.silabs.com/products/microcontroller/usb_matrix.asp

Cygnal also have a device that sounds similar to the FTDI device

http://www.silabs.com/products/microcontroller/interface_matrix.asp

-jg


Article: 66441
Subject: Multiple PicoBlaze/Bus access
From: abduln@gte.net (Abdul Nizar)
Date: 19 Feb 2004 12:32:07 -0800
Links: << >>  << T >>  << A >>
My design has 2 PicoBlaze processors on a Spartan-IIE sharing a common
IO bus (PORT_ID, IN_PORT, OUT_PORT, READ_STROBE, WRITE_STROBE). I am
planning to use simple priority based bus arbitration. Now I am trying
to figure out the minimal changes I need to make to the PicoBlaze core in
order for the IO logic to be bus aware. It needs to assert BREQ to
request the bus, wait until BACK, use the bus, then deassert BREQ to
release the bus.

Any suggestions are most appreciated. Any resources I can look at?

Thanks,

- Abdul
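A behavioural model of the simple priority scheme described above —
Python rather than the PicoBlaze HDL changes being asked about, and the
non-preemptive hold-until-release behaviour is one assumed design
choice, not the only option:

```python
def arbitrate(requests):
    """Grant the shared bus cycle by cycle.  `requests` is a list of
    per-cycle BREQ tuples, one bool per master; index 0 has highest
    priority.  Returns the granted master per cycle (None = idle).
    The current holder keeps its grant (BACK) until it drops BREQ."""
    grants, holder = [], None
    for breq in requests:
        if holder is not None and breq[holder]:
            grants.append(holder)           # non-preemptive: holder keeps the bus
            continue
        holder = None
        for m, req in enumerate(breq):      # lowest index wins arbitration
            if req:
                holder = m
                break
        grants.append(holder)
    return grants
```

For example, arbitrate([(False, True), (True, True)]) grants master 1 in
both cycles: master 0's later request does not preempt the holder.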

Article: 66442
Subject: Re: Dual-stack (Forth) processors
From: fox@ultratechnology.com (Jeff Fox)
Date: 19 Feb 2004 12:58:29 -0800
Links: << >>  << T >>  << A >>
rickman <spamgoeshere4@yahoo.com> wrote in message news:<4034D754.C3D2264@yahoo.com>...
 
> I can't say for sure exactly what kind of marketing would have helped
> the RTX succeed.  But I can tell you that any effort to pit it against
> the x86 line was misdirected.  The x86 parts were not really embedded
> chips and I don't recall them being used as such very often.  My memory
> may be failing me at this since this was long before there were chips
> aimed at the embedded market.  But the Z80 and 8085 would have been the
> main competition for an embedded processor.  The x86 line used too much
> board space and cost too much for most apps.  

I never used the 80186, but I have heard from people who did use
it in embedded work, and later the 386EX was clearly aimed at, and
used for, some embedded work (though not by me).
Technically, comparing the RTX to the 8085 or Z80 seemed compelling,
but it didn't seem to account for much. (Rocket scientists excepted.)
 
> It is likely that Harris did not understand what you do about the
> significant factors in embedded, realtime work.  It has been more than
> once that a vendor needed to educate the engineering community about the
> features that make their products a better way to go.  

While you might be right, as we can only guess about such things,
I think that since we know that at least Philip Koopman was
working there, they did have people with a deep understanding
of the significant factors in embedded, realtime work.  What I
see as the bigger issue is not whether they understood such things
but whether their intended customers did.  I suspect
that they didn't, and that Harris was not able to educate them
with their marketing effort.  I think that is the real and bigger
problem: it was not that they didn't understand, but that they were up
against and overwhelmed by pervasive marketing information that
was counter to their message.

Perhaps if they had spent billions on marketing to even the
playing field they would-could-should have sold more chips,
but it would have been a big gamble and one that was unlikely
to return sufficient profit to justify it.  Instead they
could just switch the fab line over to 80C286 and ride the
marketing wave created by so many other companies as long
as it lasted.

I often hear that Harris just didn't understand what they
should have done but have yet to hear a reasonable suggestion
of what they would-could-should have done to successfully
market the RTX instead of 286.  I keep asking people and
have yet to have a marketing expert give a good answer.
Maybe there is one and I would like to know it if there is.

Best Wishes

Article: 66443
Subject: Re: Source code for NIOS GNU toolchain
From: "Kenneth Land" <kland1@neuralog1.com1>
Date: Thu, 19 Feb 2004 15:55:40 -0600
Links: << >>  << T >>  << A >>
Didn't work here, but I didn't try very hard.  We used to use gcc in the
early '90s, and I rebuilt it on Solaris a lot back then, but not
since.

From looking through the gnuprotools distribution, all I see is code for the
stdlibs - no gdb or insight or anything too interesting.  Do you see more?

I'm not interested enough to put any more into it.  It does what I need
as is.

:)
Ken


"Jon Beniston" <jon@beniston.com> wrote in message
news:e87b9ce8.0402190800.2f11b513@posting.google.com...
> "Kenneth Land" <kland1@neuralog1.com1> wrote in message
news:<1038fig29qmvb27@news.supernews.com>...
> > Check again.  I'm downloading a version 3.1 that has today's date on it.
>
> Bizarre. Perhaps they were in the process of updating it when I
> looked. I've now got it, but it fails to build GCC on either Linux or
> CygWin:
>
> ./genattr ../../src/gcc/config/nios/nios.md > tmp-attr.h
> './../src/gcc/config/nios/nios.md:211: unknown rtx code `define_split
> ../../src/gcc/config/nios/nios.md:211: following context is `  [(set
> (match_operand:SI 0 "general_operand" "")'
>
> Did it work for you?
>
> Cheers,
> JonB



Article: 66444
Subject: Re: Dual-stack (Forth) processors
From: Jerry Avins <jya@ieee.org>
Date: Thu, 19 Feb 2004 16:58:18 -0500
Links: << >>  << T >>  << A >>
Jeff Fox wrote:

   ...

> I never used the 80186 but I have heard from people who did use
> it in embedded work and later the 386e for embedded was clearly
> aimed for and used for some embedded work. (though not be me.)
> Technically comparing RTX to 8085 or Z80 seemed compelling but
> didn't seem to account for much. (Rocket scientists excepted.)

   ...

The 80186 was indeed designed for embedded work, including some on-chip 
peripherals. I forget what about it made it awkward to use compared to 
the Z-80 and 6809, but I remember feeling that way. Probably my lack of 
familiarity bordering on ignorance.

Jerry
-- 
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯


Article: 66445
Subject: Re: Virtex-II Speed grade -6 exist?
From: ccon <>
Date: Thu, 19 Feb 2004 14:07:17 -0800
Links: << >>  << T >>  << A >>
Hi Austin, 
Thanks for the info. But I can't see the -6 option in my old 4.2i fndtn 
software. Does that mean I have to upgrade to the latest ISE if I want 
to use those parts? Or is there any way to get around it with my favorite SW? 



Article: 66446
Subject: Re: Dual-stack (Forth) processors
From: rickman <spamgoeshere4@yahoo.com>
Date: Thu, 19 Feb 2004 17:08:32 -0500
Links: << >>  << T >>  << A >>
Martin Euredjian wrote:
> 
> To send private email:
> 0_0_0_0_@pacbell.net
> where
> "0_0_0_0_"  =  "martineu"
> 
> "Jabari Zakiya" <jzakiya@mail.com> wrote in message
> news:a6fa4973.0402191126.5e7326ba@posting.google.com...
> > fox@ultratechnology.com (Jeff Fox) wrote in message
> news:<4fbeeb5a.0402182345.3a2f3fa0@posting.google.com>...
> > > jzakiya@mail.com (Jabari Zakiya) wrote in message
> news:<a6fa4973.0402181055.27856c3@posting.google.com>...
> > > > Corrections:
> > > >
> > > > The RTX 2000 had two 16-bit 256 element deep stacks (Return & Data),
> > > > a 2-4 cycle interrupt response time, and a bit-mutiply instruction
> which
> > > > could perform a complete general purpose multiply in 16-cycles. It was
> > > > rated a 8 MHz (but they could easily run at 10 MHz [which meant it
> took
> > > > a 20 MHz clock] at least at room temperatures).
> > > >
> > > > The RTX 2010 had all of the above, plus a one-cycle hardware 16-bit
> > > > multiply, a one-cycle 16-bit multiply/accumulate, and a one-cycle
> > > > 32-bit barrel shift. This was the version that Harris/Intersil based
> > > > the radhard version upon, which NASA and APL (Applied Physics Lab in
> > > > Columbia, MD) used for its space missions. They both still have a
> stash
> > > > left, the last that I heard.
> > > >
> > > > The RTX 2001 was a watered down version which was basically the 2000,
> > > > but with only 64 element deep stacks. It was intended (according to
> > > > Harris) to be a cheaper/faster alternative to the 2000, but like the
> > > > Celeron vs the Pentium, if you can get the real thing at basically the
> > > > same price, why use the neutered version? Plus, the reduction of
> stacks
> > > > from 256 elements to 64 element greatly reduced the ability to do
> > > > multi-tasking and stack switching.
> > > >
> > > > I used the RTX 2000/2010 extensively when I worked at NASA GSFC
> > > > Goddard Space Flight Center in Greenbelt, MD) from 1979-1994.
> > >
> > > I had two RTX boards.  One was a rather expensive board six layer
> > > board with a Meg of SRAM and a shared memory interface to a PC ISA
> > > bus.  It was from Silicon Composers. The other was one of the cheap
> > > European Indelko Forthkits, with RTX-cmForth, that I got from Dr.
> > > Ting.  I had no experience with the 2010.   I didn't remember that
> > > the 2001 had smaller stacks than the 2000 but I seemed to recall that
> > > the 2000 had a single cycle multiply and the 2001 had only the
> > > multiply step instruction.  I no longer have the boards or the
> > > manuals and I don't think that Dr. Koopman's book goes into the
> > > details of what made the various models of RTX-20xx different.
> > >
> > > It was a long time ago, so I might have been confused about bit
> > > level details after all of these years.  I spent a lot more years
> > > working with P21, I21 and F21 and have a much better memory of
> > > the bit level details there, it was also more recent.
> > >
> > > > I hope this helps set the history straight with regards to the
> differences
> > > > between the RTX versions. Too bad Harris didn't know how to market
> them.
> > > >
> > > > Jabari Zakiya
> > >
> > > Harris seemed to try first marketing it as Forth chip, then failing
> > > at that as a good realtime computer for use with C.  I have often
> > > heard that it was too bad that they didn't know how to market it
> > > properly.  Still I don't know if anyone really knows what they
> > > should-could-would have done to market it more successfully.  They
> > > simply decided that they could easily market 80C286 that they
> > > could make on the same fab line.  It also helps date those chips,
> > > Novix vs 8088, 8086 and RTX vs 80286.  The realtime response,
> > > fast interrupt handling (relatively) and deterministic timing
> > > were where they won most easily, but they weren't 'backward
> > > compatible' with PC software like the Intel compatible chips
> > > so they were swimming upstream in their marketing efforts.
> > >
> > > Best Wishes
> >
> > I know it was some time ago now, but only the RTX 2010 had the
> > hardware
> > one-cylce mutiply/accumulate, etc instructions. The RTX 2000/1 had the
> > one-bit 16-cycle multiply/divide instructions.
> >
> > Someone recently posted (Jan or Feb 04) the urls to get the pdfs of
> > the
> > RTX 2000 manual and RTX 2010 Intersil users guide. I have both of
> > them.
> > If you can't find them on c.l.f. let me know and I'll email them to
> > you.
> >
> > I also used the Silicon Composer boards, with the RTX 2000s. I used
> > them primarily to test/compile code for our embedded systems, and to
> > play around with for other projects. A good thing about the RTX
> > 2000/10
> > were they were pin-for-pin compatible, and I could hand code the extra
> > 2010 instructions with the SC development software. Piece of cake.
> >
> > The 2000/10 were way ahead of their times (as was the original Novix).
> > Not only could you also do 32-to-16 bit squareroots, one-cycle
> > streaming
> > memory instructions, partition memory into USER segments which could
> > be accessed faster than regular ram space, stack partitioning, 3
> > counter
> > timers, fast external interrupt response, the clock could be
> > completely
> > turned off, and back on again without losing instructions, to run on
> > as
> > littel power as you could get away with.
> >
> > If only somebody like Motorola had taken hold of these chips. With
> > their
> > fabrication capabilities, and cpu marketing savy, they couldl have
> > really
> > done something with these desigsn. And think about just a 500 MHz RTX
> > type
> > chip know. Even then, the RTX performed at mulitple MIPs times its
> > operating
> > frequency (because of instructions compression - multiple Forth words
> > being performed by one RTX instruction)
> >
> > But of course, I'm just engaging in ex post factor wishful thinking.
> >
> > Yours
> >
> > Jabari
> 
> Imagine if you had to scroll down to here on every new message!!!
> 
> Jabari,
> 
> What's annoying about bottom posting?
> 
> Bottom-posting-gone-wild with a full thread quoted in the top of the
> message.  Like yours.
> 
> At least with top posting I can very quickly click through a bunch of
> messages and read the new post.  With bottom-posting-gone-wild you have to
> scroll to the bottom of every darn message to find the new text.  Not only a
> waste of bandwidth but annoying as hell.  See below for an example.
> 
> With top posting, if you are reading through a thread, you can quickly
> navigate up/down the thread and read it without further calisthenics.
> 
> If you want to bottom post (which is appropriate when you need paragraphs to
> be in context, or to address specific statements) at least take the time to
> clip and snip the relevant parts of the message you are replying to.
> 
> --
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> Martin Euredjian

--- mixed top and bottom posting fixed ---

Personally, I don't care one way or another how people post; I'm not
interested in getting into the top/bottom posting wars.  But mixed
posting is even worse.  I know you were trying to make a point, but it
is of no value.  People will do what they want to do no matter how many
people tell them to stop.

Besides, it is only a single keystroke to go to the bottom of a page.
Is that really a big problem?

-- 

Rick "rickman" Collins

rick.collins@XYarius.com
Ignore the reply address. To email me use the above address with the XY
removed.

Arius - A Signal Processing Solutions Company
Specializing in DSP and FPGA design      URL http://www.arius.com
4 King Ave                               301-682-7772 Voice
Frederick, MD 21701-3110                 301-682-7666 FAX

Article: 66447
Subject: Re: Dual-stack (Forth) processors
From: rickman <spamgoeshere4@yahoo.com>
Date: Thu, 19 Feb 2004 17:12:53 -0500
Links: << >>  << T >>  << A >>
Jerry Avins wrote:
> 
> Jeff Fox wrote:
> 
>    ...
> 
> > I never used the 80186 but I have heard from people who did use
> > it in embedded work and later the 386e for embedded was clearly
> > aimed for and used for some embedded work. (though not be me.)
> > Technically comparing RTX to 8085 or Z80 seemed compelling but
> > didn't seem to account for much. (Rocket scientists excepted.)
> 
>    ...
> 
> The 80186 was indeed designed for embedded work, including some on-chip
> peripherals. I forget what about it made it awkward to use compares to
> Z-80 and 6809, but I remember feeling that way. Probably my lack of
> familiarity bordering on ignorance.

I don't think it was awkward compared to any of the 8 bitters.  But it
was not *PC* compatible because the IO map was different.  I guess back
then everyone either wanted a lower priced PC equivalent or they wanted
more MIPs and the 186 did neither.  

-- 

Rick "rickman" Collins

rick.collins@XYarius.com
Ignore the reply address. To email me use the above address with the XY
removed.

Arius - A Signal Processing Solutions Company
Specializing in DSP and FPGA design      URL http://www.arius.com
4 King Ave                               301-682-7772 Voice
Frederick, MD 21701-3110                 301-682-7666 FAX

Article: 66448
Subject: Re: Dual-stack (Forth) processors
From: "Jon Harris" <goldentully@hotmail.com>
Date: Thu, 19 Feb 2004 15:06:10 -0800
Links: << >>  << T >>  << A >>
"Jerry Avins" <jya@ieee.org> wrote in message
news:4035159f$0$3078$61fed72c@news.rcn.com...
>
> When I reply to a top-posted message with a sig -- like yours -- 
> everything below the sig is gone, as here. The _)(*&^%$#@!! news program
> snips it all silently.
>
> Jerry

What newsreader?  Sounds like a bug, or at least something that should have
a switch to defeat it.



Article: 66449
Subject: Re: Source code for NIOS GNU toolchain
From: jon@beniston.com (Jon Beniston)
Date: 19 Feb 2004 15:23:27 -0800
Links: << >>  << T >>  << A >>
jon@beniston.com (Jon Beniston) wrote in message news:<e87b9ce8.0402190800.2f11b513@posting.google.com>...
> "Kenneth Land" <kland1@neuralog1.com1> wrote in message news:<1038fig29qmvb27@news.supernews.com>...
> > Check again.  I'm downloading a version 3.1 that has today's date on it.
> 
> Bizarre. Perhaps they were in the process of updating it when I
> looked. I've now got it, but it fails to build GCC on either Linux or
> CygWin:
> 
> ./genattr ../../src/gcc/config/nios/nios.md > tmp-attr.h
> './../src/gcc/config/nios/nios.md:211: unknown rtx code `define_split
> ../../src/gcc/config/nios/nios.md:211: following context is `  [(set
> (match_operand:SI 0 "general_operand" "")'
> 
> Did it work for you?

Ah! You need to strip the \r's from nios.md!

Cheers,
JonB
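The \r-stripping fix above, as a small script — equivalent to dos2unix
or `tr -d '\r'` on nios.md, since genattr chokes on CRLF line endings;
the function name is mine:

```python
def strip_crs(path: str) -> None:
    """Remove carriage returns from a file in place, so tools like
    genattr see plain LF line endings instead of CRLF."""
    with open(path, "rb") as f:
        data = f.read()
    with open(path, "wb") as f:
        f.write(data.replace(b"\r", b""))
```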


