
Messages from 139025

Article: 139025
Subject: Re: Zero operand CPUs
From: Helmar <helmwo@gmail.com>
Date: Wed, 18 Mar 2009 11:52:53 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Mar 18, 2:42 pm, "Antti.Luk...@googlemail.com"
<Antti.Luk...@googlemail.com> wrote:
> On Mar 18, 7:54 pm, Jacko <jackokr...@gmail.com> wrote:
>
> > On 18 Mar, 16:59, rickman <gnu...@gmail.com> wrote:
>
> > > On Mar 18, 8:36 am, Jacko <jackokr...@gmail.com> wrote:
>
> > > > \ FORTH Assembler for nibz
> > > > \
> > > > \ Copyright (C) 2006,2007,2009 Free Software Foundation, Inc.
>
> > > > \ This file is part of Gforth.
>
> > > > \ Gforth is free software; you can redistribute it and/or
> > > > \ modify it under the terms of the GNU General Public License
> > > > \ as published by the Free Software Foundation, either version 3
> > > > \ of the License, or (at your option) any later version.
>
> > > > \ This program is distributed in the hope that it will be useful,
> > > > \ but WITHOUT ANY WARRANTY; without even the implied warranty of
> > > > \ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> > > > \ GNU General Public License for more details.
>
> > > > \ You should have received a copy of the GNU General Public License
> > > > \ along with this program. If not, see http://www.gnu.org/licenses/.
> > > > \
> > > > \ Author:          Simon Jackson, BEng.
> > > > \
> > > > \ Information:
> > > > \
> > > > \ - Simple Assembler
>
> > > > \ only forth definitions
>
> > > > require asm/basic.fs
>
> > > >  also ASSEMBLER definitions
>
> > > > require asm/target.fs
>
> > > >  HERE                 ( Begin )
>
> > > > \ The assembler is very simple. All 16 opcodes are
> > > > \ defined immediate so they can be inlined into colon defs.
>
> > > > \ primary opcode constant writers
>
> > > > : BA 0 , ; immediate
> > > > : FI 1 , ; immediate
> > > > : RI 2 , ; immediate
> > > > : SI 3 , ; immediate
>
> > > > : DI 4 , ; immediate
> > > > : FA 5 , ; immediate
> > > > : RA 6 , ; immediate
> > > > : SA 7 , ; immediate
>
> > > > : BO 8 , ; immediate
> > > > : FO 9 , ; immediate
> > > > : RO 10 , ; immediate
> > > > : SO 11 , ; immediate
>
> > > > : SU 12 , ; immediate
> > > > : FE 13 , ; immediate
> > > > : RE 14 , ; immediate
> > > > : SE 15 , ; immediate
>
> > > >  HERE  SWAP -
> > > >  CR .( Length of Assembler: ) . .( Bytes ) CR
>
> > > What instruction is a CALL?  How do you specify the address?  How do
> > > you specify literal data?
>
> > > Rick
>
> > $addr ,
>
> Jacko
>
> your comments are just getting more and more fuzzy and cryptic.
>
> is that on purpose?
>
> Antti

$addr , is not in conflict with a CALL. As for a literal, Jacko will
have to answer that himself, as far as my understanding goes.

Regards,
-Helmar

Article: 139026
Subject: Re: Zero operand CPUs
From: "Antti.Lukats@googlemail.com" <Antti.Lukats@googlemail.com>
Date: Wed, 18 Mar 2009 12:07:05 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Mar 18, 3:51 pm, Jacko <jackokr...@gmail.com> wrote:
> On 18 Mar, 11:48, "Antti.Luk...@googlemail.com"
>
>
>
> <Antti.Luk...@googlemail.com> wrote:
> > On Mar 18, 8:45 am, rickman <gnu...@gmail.com> wrote:
>
> > > On Mar 17, 7:06 pm, Jacko <jackokr...@gmail.com> wrote:
>
> > > > There is now a link on the instruction set page presenting an english
> > > > text description of the BO instruction. All other instructions follow
> > > > a similar symbology.
>
> > > Dude, you are really terrible at this.  In one place you tell people
> > > about a CPU you designed with no specifics.  Another place you post a
> > > link to a web page with very fuzzy descriptions of the instruction set
> > > that is not usable.  Then here you post that you have added some more
> > > explanation, but no link.  Are we supposed to search around to find
> > > the link to your web page again?  I have no idea where to find it.
>
> > > I may not be very tactful, but I really am trying to help you, not be
> > > insulting. =A0I hope it doesn't come off that way.
>
> > > Rick
>
> > You are so right...
> > I looked (again) at the nibz web
>
> > downloaded the most promising document (tagged FEATURED!)
>
> > did not understand much, scrolled to the end of the document..
> > where on the last line stands:
>
> > ".... ummmm ...."
>
> > guess this is what we all should be doing: ummmmmmm
>
> The umm does make sense in terms of what the document was describing.
>

Jacko:

....ummmmm....

does NEVER make sense in a document describing an IP core.

not in my world (sure maybe i live in the wrong one)

Antti
Article: 139027
Subject: Re: Bullshit! - Re: Zero operand CPUs
From: Herbert Kleebauer <klee@unibwm.de>
Date: Wed, 18 Mar 2009 21:48:11 +0100
Links: << >>  << T >>  << A >>
zwsdotcom@gmail.com wrote:
> 
> On Mar 18, 9:06 am, Helmar <hel...@gmail.com> wrote:
> 
> > What do you expect from something that does not need operands or has
> 
> Studying first-year computer science (machine architecture) might be
> helpful to you, since this is very standard terminology. Of course
> relatively few people have worked on zero-operand ISAs but I guess
> most people in this NG have worked extensively with 1- and 2-operand
> machines, and probably 3-operand also.

I suppose, most have worked with a 0-, 1-, 1 1/2-, 2-, 3- and 3*1/2 
ADDRESS machines but none with a 0- or 1- OPERAND machine.

An add instruction always needs two operands and has one result.
But if you use a predefined location (stack, accumulator) for some of
them, then you can omit the address information in the instruction.
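The saving is easy to see if you count instruction bits. A quick sketch (the 6-bit opcode and 16-register file are made-up numbers for illustration, not any particular machine):

```python
# Compare instruction widths when address fields are dropped in favor of
# implied locations (stack, accumulator).  Opcode and register-file sizes
# here are illustrative assumptions, not any real ISA.
OPCODE_BITS = 6
REG_ADDR_BITS = 4          # a 16-register file needs 4 bits per address field

def insn_bits(address_fields):
    """Total encoding width for an instruction with N explicit address fields."""
    return OPCODE_BITS + address_fields * REG_ADDR_BITS

print("3-address add (ADD r3, r1, r2):", insn_bits(3), "bits")   # 18
print("1-address add (ADD r1, to accu):", insn_bits(1), "bits")  # 10
print("0-address add (ADD, via stack):", insn_bits(0), "bits")   # 6
```

The operation itself still takes two inputs and produces one result in all three cases; only the number of address fields in the encoding changes.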

Article: 139028
Subject: Re: Bullshit! - Re: Zero operand CPUs
From: Dombo <dombo@disposable.invalid>
Date: Wed, 18 Mar 2009 21:53:55 +0100
Links: << >>  << T >>  << A >>
Herbert Kleebauer schreef:
> zwsdotcom@gmail.com wrote:
>> On Mar 18, 9:06 am, Helmar <hel...@gmail.com> wrote:
>>
>>> What do you expect from something that does not need operands or has
>> Studying first-year computer science (machine architecture) might be
>> helpful to you, since this is very standard terminology. Of course
>> relatively few people have worked on zero-operand ISAs but I guess
>> most people in this NG have worked extensively with 1- and 2-operand
>> machines, and probably 3-operand also.
> 
> I suppose, most have worked with a 0-, 1-, 1 1/2-, 2-, 3- and 3*1/2 
> ADDRESS machines but none with a 0- or 1- OPERAND machine.
> 
> An add instruction always needs two operands and has one result.
> But if you use a predefined location (stack, accu) for some of them 
> then you can omit the address information in the instruction.


Maybe 'implicit operand' would be a more accurate description.

Article: 139029
Subject: Re: Xilinx XAPP052 LFSR and its understanding
From: glen herrmannsfeldt <gah@ugcs.caltech.edu>
Date: Wed, 18 Mar 2009 20:59:53 +0000 (UTC)
Links: << >>  << T >>  << A >>
Weng Tianxiang <wtxwtx@gmail.com> wrote:
(snip)
 
> In other words, if a seed value is close or equal to the all-'1'
> situation, the LFSR is a shorter random number generator than its
> claimed 63-bit length. There is no way to know exactly whether
> a seed value is close to the all-'1' situation.

There is much written about LFSRs, so we don't need to explain
it all here.  Just to be sure you understand: if you start with
a state that isn't all '1', then it will never get to that
state (assuming proper design).  You do have to be sure
not to start in that state, though.

-- glen
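Glen's point is easy to check in simulation. A small model of a 4-bit XNOR-feedback LFSR (taps Q3 and Q4, as in the XAPP052 4-bit example; the bit numbering here is my own convention):

```python
# 4-bit XNOR-feedback LFSR model.  Started anywhere except all-ones, it
# cycles through the other 15 states and never reaches all-ones; started
# at all-ones, it is stuck there forever.
N = 4
MASK = (1 << N) - 1

def step(state):
    # XNOR feedback from the two tap bits (the last two stages)
    fb = 1 ^ ((state >> (N - 2)) & 1) ^ ((state >> (N - 1)) & 1)
    return ((state << 1) | fb) & MASK

seen = []
s = 0                      # all-zeros is a legal start state for an XNOR LFSR
for _ in range(15):
    seen.append(s)
    s = step(s)

print(len(set(seen)))      # 15 distinct states, then the cycle repeats
print(MASK in seen)        # False: all-ones is never visited
print(step(MASK) == MASK)  # True: all-ones is the stuck state
```

So with any seed other than all-ones the sequence has the full (2^n)-1 period; the "shorter generator" worry does not arise.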

Article: 139030
Subject: Re: Zero operand CPUs
From: "Steve at fivetrees" <steve@NOSPAMTAfivetrees.com>
Date: Thu, 19 Mar 2009 00:00:13 -0000
Links: << >>  << T >>  << A >>
"David Brown" <david@westcontrol.removethisbit.com> wrote in message 
news:49bfc0c7$0$14779$8404b019@news.wineasy.se...
> rickman wrote:
>>
>> BTW, ZPU may have a GCC compiler, but without a debugger, is that
>> really useful?  There aren't many projects done in C that are debugged
>> without an emulator.
>>
>
> It's possible to do a lot of development without a debugger.  I often do 
> embedded development without one (though I prefer to have one available if 
> possible).  Until you've done debugging with only a single LED for 
> signalling, you haven't really done embedded development.  Bonus points if 
> the microcontroller you're using only comes in OTP version.

Ooh yeah - been there, done that. Big believer in the idea that if it's 
designed right, all you have to worry about is typos.

Haven't used an ICE (or a debugger) in about 25 years.

Steve
--
http://www.fivetrees.com 



Article: 139031
Subject: Re: Bullshit! - Re: Zero operand CPUs
From: "Steve at fivetrees" <steve@NOSPAMTAfivetrees.com>
Date: Thu, 19 Mar 2009 00:13:11 -0000
Links: << >>  << T >>  << A >>
"Dombo" <dombo@disposable.invalid> wrote in message 
news:49c15f65$0$28145$5fc3050@news.tiscali.nl...
>
> Maybe 'implicit operand' would be a more accurate description.

That is almost the first sensible comment I've read in this thread.

Steve
--
http://www.fivetrees.com 



Article: 139032
Subject: Re: Zero operand CPUs
From: rickman <gnuarm@gmail.com>
Date: Wed, 18 Mar 2009 17:38:25 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Mar 18, 1:54 pm, Jacko <jackokr...@gmail.com> wrote:
> On 18 Mar, 16:59, rickman <gnu...@gmail.com> wrote:
>
>
>
> > On Mar 18, 8:36 am, Jacko <jackokr...@gmail.com> wrote:
>
> > > \ FORTH Assembler for nibz
> > > \
> > > \ Copyright (C) 2006,2007,2009 Free Software Foundation, Inc.
>
> > > \ This file is part of Gforth.
>
> > > \ Gforth is free software; you can redistribute it and/or
> > > \ modify it under the terms of the GNU General Public License
> > > \ as published by the Free Software Foundation, either version 3
> > > \ of the License, or (at your option) any later version.
>
> > > \ This program is distributed in the hope that it will be useful,
> > > \ but WITHOUT ANY WARRANTY; without even the implied warranty of
> > > \ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> > > \ GNU General Public License for more details.
>
> > > \ You should have received a copy of the GNU General Public License
> > > \ along with this program. If not, see http://www.gnu.org/licenses/.
> > > \
> > > \ Author:          Simon Jackson, BEng.
> > > \
> > > \ Information:
> > > \
> > > \ - Simple Assembler
>
> > > \ only forth definitions
>
> > > require asm/basic.fs
>
> > >  also ASSEMBLER definitions
>
> > > require asm/target.fs
>
> > >  HERE                 ( Begin )
>
> > > \ The assembler is very simple. All 16 opcodes are
> > > \ defined immediate so they can be inlined into colon defs.
>
> > > \ primary opcode constant writers
>
> > > : BA 0 , ; immediate
> > > : FI 1 , ; immediate
> > > : RI 2 , ; immediate
> > > : SI 3 , ; immediate
>
> > > : DI 4 , ; immediate
> > > : FA 5 , ; immediate
> > > : RA 6 , ; immediate
> > > : SA 7 , ; immediate
>
> > > : BO 8 , ; immediate
> > > : FO 9 , ; immediate
> > > : RO 10 , ; immediate
> > > : SO 11 , ; immediate
>
> > > : SU 12 , ; immediate
> > > : FE 13 , ; immediate
> > > : RE 14 , ; immediate
> > > : SE 15 , ; immediate
>
> > >  HERE  SWAP -
> > >  CR .( Length of Assembler: ) . .( Bytes ) CR
>
> > What instruction is a CALL?  How do you specify the address?  How do
> > you specify literal data?
>
> > Rick
>
> $addr ,

How many bits are in an opcode, 4 or 5?  I would say it has to be five
or there is no way for the machine to distinguish between an opcode
and an address.  In other words, there *has* to be a CALL instruction,
even if it is just a one bit opcode with the rest being the address.

Rick

Article: 139033
Subject: Re: Bullshit! - Re: Zero operand CPUs
From: rickman <gnuarm@gmail.com>
Date: Wed, 18 Mar 2009 17:53:48 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Mar 18, 4:48 pm, Herbert Kleebauer <k...@unibwm.de> wrote:
> zwsdot...@gmail.com wrote:
>
> > On Mar 18, 9:06 am, Helmar <hel...@gmail.com> wrote:
>
> > > What do you expect from something that does not need operands or has
>
> > Studying first-year computer science (machine architecture) might be
> > helpful to you, since this is very standard terminology. Of course
> > relatively few people have worked on zero-operand ISAs but I guess
> > most people in this NG have worked extensively with 1- and 2-operand
> > machines, and probably 3-operand also.
>
> I suppose, most have worked with a 0-, 1-, 1 1/2-, 2-, 3- and 3*1/2
> ADDRESS machines but none with a 0- or 1- OPERAND machine.
>
> An add instruction always needs two operands and has one result.
> But if you use a predefined location (stack, accu) for some of them
> then you can omit the address information in the instruction.

Everyone here who is objecting to the concept of a zero-operand CPU
seems to be defining the term for themselves and then arguing about
it.  A zero-operand instruction set is not one where the operators
*have* no operands.  It is one where the instruction has to *specify*
no explicit operands.  The operands are ***implied*** by the
architecture of the machine.  e.g. a stack machine can use the top two
elements of the stack as the operands and can put the result back on
the stack.  No operands need to be *specified* because that is
*implied* in the architecture.
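The implied-operand scheme above can be sketched in a few lines. This is a minimal, hypothetical stack-machine interpreter (the opcode names are mine, not nibz's actual instruction set); note that ADD and XOR name no operands at all:

```python
# Minimal zero-operand (stack) machine sketch.  ADD and XOR carry no
# operand fields: both inputs and the destination are implied by the
# stack.  Opcode names are illustrative assumptions.
def run(program):
    stack = []
    for insn in program:
        op = insn[0]
        if op == "lit":                # literals still carry an inline value
            stack.append(insn[1])
        elif op == "add":              # operands implied: top two stack cells
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "xor":
            b, a = stack.pop(), stack.pop()
            stack.append(a ^ b)
        else:
            raise ValueError(f"unknown opcode {op!r}")
    return stack

print(run([("lit", 2), ("lit", 3), ("add",)]))   # [5]
print(run([("lit", 6), ("lit", 3), ("xor",)]))   # [5]
```

Nothing in the "add" instruction says where its inputs come from or where the result goes; that is fixed by the architecture, which is exactly what "zero operand" means.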

Why would anyone want to argue about the name of a concept rather than
to *ask* someone to explain the concept?

Rick

Article: 139034
Subject: Re: Zero operand CPUs
From: Eric Smith <eric@brouhaha.com>
Date: Wed, 18 Mar 2009 18:09:13 -0700
Links: << >>  << T >>  << A >>
"Antti.Lukats@googlemail.com" <Antti.Lukats@googlemail.com> writes:
> LUT6 hopefully means also SRL64 what would be cool, but eh it should

Some of the Virtex 6 and Spartan 6 LUTs are configurable as SRL32 as in
Virtex 5.  No SRL64, so you'll have to use two LUTs for that.

I seem to recall that one of the Xilinx employees explained to us a
while back that the latch structure used in the LUT required either
two latches per bit or some other magic to make it work properly as a
shift register (e.g., avoid a race where the input falls through
multiple level-triggered latch bits), and that they've left that magic
out of the newer slice designs.

Obviously there are some details we customers don't know about, since
they manage to shift one bit at a time through all 64 latches in the
6-LUT as part of the configuration process.

Article: 139035
Subject: Re: Zero operand CPUs - debugging
From: "Chris Burrows" <cfbsoftware@hotmail.com>
Date: Thu, 19 Mar 2009 12:41:32 +1030
Links: << >>  << T >>  << A >>
"David Brown" <david@westcontrol.removethisbit.com> wrote in message 
news:49bfc0c7$0$14779$8404b019@news.wineasy.se...
>
> It's possible to do a lot of development without a debugger.  I often do 
> embedded development without one (though I prefer to have one available if 
> possible).  Until you've done debugging with only a single LED for 
> signalling, you haven't really done embedded development.  Bonus points if 
> the microcontroller you're using only comes in OTP version.
>

I agree. From my experience, if, when writing the code, I write it assuming 
that I will not have access to a debugger, there's an excellent chance I 
won't actually need one when testing it. It might take me a bit of extra 
thinking at coding time but, considering that coding is typically 10% of the 
initial development effort and testing is typically 50%, the resulting 
savings in testing time more than compensate.  For software destined to have 
a long life-cycle, the subsequent gains during the period of program 
maintenance are even more substantial.

Consequently, I haven't had to resort to using a debugger for embedded 
development yet.  I'm convinced that, for me, not having to program in C is 
a significant factor - 95% of my errors are picked up at compile time. 
Typically most of the remainder are detected at runtime - as and when they 
happen, not 5 minutes later in some other unrelated part of the code!

--
Chris Burrows
CFB Software
Armaide: ARM Oberon Development System for Windows
http://www.cfbsoftware.com/armaide



Article: 139036
Subject: Re: Xilinx XAPP052 LFSR and its understanding
From: Peter Alfke <alfke@sbcglobal.net>
Date: Wed, 18 Mar 2009 20:39:46 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Mar 18, 10:04 am, Weng Tianxiang <wtx...@gmail.com> wrote:
> Hi,
> I want to generate a random number in a FPGA chip and in a natural way
> it leads to a famous paper Xilinx XAPP052 authored by Peter Alfke:
> "Efficient Shift Register, LFSR Counters, and Long Pseudo-Random
> Sequence Generators".
>
> I have two problems with the note.
>
> 1. I don't understand "Divide-by-5 to 16 counter". I appreciate if
> someone explain the Table 2 and Figure 2 in more details.
>
> 2. In Figure 5, there is an equation: (Q3 xnor Q4) xor (Q1 and Q2 and
> Q3 and Q4).
> (Q3 xnor Q4) is required to generate 4-bit LFSR counter. (Q1 and Q2
> and Q3 and Q4) is used to avoid a dead-lock situation from happening
> when Q1-Q4 = '1'.
>
> Now the 4-bit LFSR counter dead-lock situation should be extended to
> any bits long LFSR counter if 2 elements XNOR operation is needed.
> Especially in Figure 5 for 63-bit LFSR counter. When all 63-bits are
> '1', it would be dead-locked into the all '1' position, because (Q62
> xnor Q63) = '1' if both Q63 and Q62 are '1'.  But that situation is
> excluded in the equation in Figure 5.
>
> In other words, if a seed value is close or equal to the all-'1'
> situation, the LFSR is a shorter random number generator than its
> claimed 63-bit length. There is no way to know exactly whether
> a seed value is close to the all-'1' situation.
>
> We can add logic equation as 4-bit situation does as follows:
> (Q62 xnor Q63) xor (Q1 and Q2 and ... and Q63).
>
> There is a new question: If there is a more clever idea to do the same
> things to avoid the 63-bit dead-lock situation from happening?
>
> Weng

Weng, there is no mystery. All the LFSRs that I described count
through (2^n)-1 states, since they naturally will never get into the
all-ones state. If you want them to include that state, you need to
decode the state one prior, use one gate to invert the input, and the
same gate gets you out of it again. The "high" cost is that one very
wide gate, nothing else. LFSRs are well-documented. I had just dug up
some old information that I had generated at Fairchild Applications
in the 'sixties.
Peter Alfke
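The one-gate fix can be checked in simulation. A sketch for the 4-bit case: the wide gate here ANDs all stages except the last, which is my reading of "decode the state one prior" (the same gate is true in the state just before all-ones and in all-ones itself), so treat it as an assumption rather than the exact XAPP052 circuit:

```python
# 4-bit XNOR LFSR extended to a full 16-state cycle with one wide gate.
# The gate detects Q1..Q(n-1) all ones; it fires in the state just before
# all-ones (steering the LFSR in) and in all-ones itself (steering it
# back out).  Interpretation of the fix described above -- verify before
# reusing on a real design.
N = 4
MASK = (1 << N) - 1

def step_full(state):
    q = [(state >> i) & 1 for i in range(N)]   # q[0] = newest stage Q1
    base = 1 ^ q[N - 2] ^ q[N - 1]             # normal XNOR feedback (taps Q3, Q4)
    gate = int(all(q[:N - 1]))                 # the one wide AND gate
    return ((state << 1) | (base ^ gate)) & MASK

seen, s = [], 0
for _ in range(1 << N):
    seen.append(s)
    s = step_full(s)

print(len(set(seen)))   # 16: every state visited, including all-ones
print(s)                # 0: the cycle closes
```

The cost really is the single wide gate: everything else is the ordinary XNOR feedback.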

Article: 139037
Subject: Re: Bullshit! - Re: Zero operand CPUs
From: Jeff Fox <fox@ultratechnology.com>
Date: Wed, 18 Mar 2009 20:59:11 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Mar 18, 9:36 am, Helmar <hel...@gmail.com> wrote:
> On Mar 18, 11:14 am, Jeff Fox <f...@ultratechnology.com> wrote:
> > On Mar 18, 6:06 am, Helmar <hel...@gmail.com> wrote:
>
> > > What do you expect from something that does not need operands or has
> > > only the operand "0" zero? It can not do much useful things - ok, it
> > > could heat the room it is inside.
>
> > Zero operand just means that the top of the stack is the implied
> > operand for most instructions.  It means that a register decoding
> > phase is not needed in the execution of instructions because the
> > operand is the stack.
>
> OK, same then: a TOS can not be the only operand. You very probably
> mean of course that the stack contains the operands.

No.  The term refers to the instructions, instruction format, and
use of operand fields in the instruction format.  Instructions don't
reside on the stacks, and neither do the operand fields inside the
instructions.

The name 'zero operand' refers to the number of operand fields in
the instruction set.  On a register machine you need operand
fields in the instructions that have to be decoded after the type
of instruction is decoded.

An example of a three-operand instruction would be: XOR register 01
with register 02 and write the result to register 03.  This type
of register operation uses three operands.  Some have more,
some have less.  The path to the data operated on is designated
by operand fields in the instructions.

The equivalent instruction on a stack machine is just XOR.  There
are zero operands in the *instruction* to be decoded.  That's
what the term means.  The path to the data is hard wired: an
exclusive OR of the top of the stack and the next item on the
stack, with the result replacing those values as the new top of
stack.  Since all of that is implied, ZERO operand fields are
needed in the instruction.

> This is indeed
> clever but does not mean "zero operands".

You seem to have completely misunderstood the term, and to be
claiming that it must mean NOP only. ;-)

> Regarding this "register decoding phase" - well, you have to
> "decode" it if you make the distinction between "operation" and
> "operand".

Not when the path to the data for each instruction is hard wired
and there are no operand fields in the instructions.

> But if you see "ADD A,
> 1" as a different operation than "ADD B,1",

Same instruction but different operand fields to select the path
to the data.  Yes.   On a zero operand design there is just ADD.
There is no set of operands to specifiy the path for an ALU to
use on the add.  The path to data is hard wired on zero operand
designs. This is obvious because the "instruction" is just "ADD"
and contains no operands, zero operands.

In the case of your register opcode example the operand bits may
not be the same for all instructions.  So first one has to decode
that it is an instruction with two register operands.  The operands
can then be used to gate the ALU.  Then, after that, the ALU operates
on the data.

> you do not have any
> "phase" in front of knowing what operation is needed.

Only if all instructions in the instruction set have the same format.
If there are any eight-bit instructions, or larger instructions,
or instructions that don't have exactly two operands at fixed
positions, then there is a separate phase.

But I think you missed my point.  I was saying that with a zero
operand design the paths to operands are fixed.  In the designs
mentioned indirectly in the original post in the thread, all opcodes
begin execution at the same time that the decoding of an instruction
begins.  When the instruction decoding is complete the instruction
has completed any logical operation and it is selected as the
instruction to write its results to the system.  So there is an
instruction decode phase and a write result phase.

In the kind of register designs you are describing, the paths to the
ALU are selected only after the appropriate instruction format is
known.  So first the instruction is decoded; then, knowing which
operands mean what to this instruction, the paths to the ALU are set
up.  Then and only then the ALU operates on this path and the results
are written to the appropriate place, which might be specified by an
operand decoded after the instruction format is known.

The point you seem to have missed is that one way to describe that is
to say that a phase is needed between instruction decoding and the
ALU operation because operands are used.  When operand decoding is
needed the ALU can't perform the "ADD A,1" until after the instruction
is decoded.  On zero operand designs the "ADD" can begin execution
at the same time that instruction decoding begins.

> It simply could
> be two different operations. So if you term a "stack" layout based
> thing "zero operand" you could also call every "register" based layout
> to be "zero operand", as far as every register has its own
> implementation of "ADD".

But that's not how real designs work.  Every combination of registers
does not have its own hard wired ALU.  ;-)

An ALU requires operands to gate its input and output when it is
shared.  There is not a dedicated ALU for every possible path in
these designs.  In contrast, there may not be an ALU at all in a zero
operand design: + has an addition circuit, XOR has an XOR circuit,
etc.  As a result no operands are needed in the instruction to gate
an ALU.

> > Instead of having to decode which registers are gated for input
> > and which register is gated for output, and set up the gates to
> > the ALU as some of the steps in executing an instruction the path
> > to operands is hard wired because it is the top of the stack.  So
> > there are no operands to decode.
>
> There are still operands to decode. It might be the stack is somewhere
> in memory

Again, the discussion here was about Chuck Moore's zero operand
designs, which have register based stacks, not stacks in memory.
Stacks in memory are a whole other thing than the subject that was
introduced here.

The original poster stated that a certain design with uncached stacks
in memory was slow because a simple operation like "+" would require
four memory accesses: load the opcode, load a parameter from a stack
in memory, load a parameter from a stack in memory, write the result
back to the stack in memory.  In contrast we have been talking about
stacks in registers and only one instruction memory access per four
stack instructions.  So the thread began by contrasting four memory
accesses per stack operation to four stack operations per memory
access.

> or it's something like a few registers without possibility
> to address them. There are still operands and they need to be
> addressed and as that also "decoded" (even if this is not needed by
> the symbolical instruction representation called "machine code").

Let's take your example "+" and note that in a zero operand design
that's the instruction, just  "+" because there are zero operands.

There are zero operands needed in the instruction. It is different
than "ADD A,1" because no operand specifying an ALU path is needed.

In the zero operand version the arguments are in the T register and
the S register; the result replaces T, and S is popped from the lower
stack registers.  This doesn't have to be specified in programming
the machine with operand fields in the instruction.  The path for
the add instruction is hard wired.  No operand for this path is
needed in the instruction set.  It is a simple idea.

> > It is not very clever to try to convince people that it means that
> > there are no arguments at all and therefore all it can do is NOP.
>
> Well, without further instruction, everybody will think it is so.

I doubt it.  I think most people know that zero operand architecture
means a stack machine where arguments are mostly on a data stack and
need no operands in the instruction set.  I have never heard of
anyone else ever assuming it means NOP only because no arguments are
used. ;-)

I have never heard anyone claim that all one can do in Forth is NOP.
I have seen people demonstrate that "they" can't write good code but
that proves very little.

> > It certainly does not mean that it cannot do useful things.  After all
> > some languages like Forth are also zero operand because they use
> > the top of a parameter stack. =A0Parameters are still needed for
> > branching and for literals but they are not operand fields for
> > registers that require a decoding phase hence the name zero operand.
>
> > Some people will declare that programming a register machine is
> > just easiter than programming a stack machine. =A0Their argument
> > is that the compiler they use makes it easy or that they can
> > stay at a higher level of abstraction.
>
> ;) OK, that declarations does not interest me, as far as even mobile
> phones start to be implemented using x86 technology...

I hadn't heard that.  Which cell phones have x86?

> > Some people will declare that programming a stack machine is just
> > easier than programming a register machine. =A0Their argument is that
> > their compiler and language is simpler and smaller and doesn't
> > need all that complicated register stuff. =A0They may say that to
> > program a register machine people need complex smart
> > compilers to hide the real complexity under the hood and they
> > prefer something simpler.
>
> OK, in Forth:
>
> : foo dup ;
> : foo1 ! ;
> : foo2 swap foo tuck foo1 ;
>
> This now looks like a completely "useless" complicated sequence,
> nobody would write this way.

I agree. It is really terrible code.  One can write really terrible
code in any language. It doesn't prove much.  It looks nothing like
real Forth code. ;-)

> The truth is, that the more you use that nice "symbolic" stuff in
> Forth, the more that "useless" complicated sequences occur in output.

It sounds like you haven't learned much about Forth.

I recall one programmer at SVFIG lamenting that for him the only way
to make money with Forth was to keep it a secret.  When he made the
mistake of telling clients that he had written their application in
Forth, they realized that the nice symbolic stuff was so clear that
even the project manager, with very little knowledge of Forth, could
update the code and make changes that worked, and that they never
needed to call the expert back in again for maintenance.  He said if
he didn't tell them that it was a Forth program they were more likely
to call him for updates.

> If you want to allow your users to use that symbolic stuff - and you
> want to avoid too "useless" operations, even the stack based compiler-
> thing has to optimize.

I want them to have access to the nice symbolic abstractions indeed.
I want them to never have to deal with useless code like you wrote.
Twenty years ago I argued with Chuck Moore about compiler
optimizations.

I have written compilers with a few hundred such optimizations.  But
after a few more years I switched to a style of writing much simpler
code where the compiler had no opportunity to optimize more than a
little inlining and tail recursion.  Again, the original poster
simply said that he now saw that Chuck Moore had the right idea about
simplicity in design, but he didn't spell out what that meant.

At one extreme people want chips that they say they can't program
without a very smart optimizing compiler.  At the other extreme
people say that if you write smarter source code then the compiler
doesn't have to be so smart.  When asked about optimizing compilers
for simple chips like those being discussed, Chuck Moore asked,
"Why?  You want to write non-optimal code?"

> Except for few, simple loophole-optimizations,
> the stack based design of a processor does not help with it. The
> compiler gets at least as complicated as every other - or maybe even
> more complex (this would have to be done first I think).

Chuck uses colorforth exclusively.  It has a "brutally simple"
compiler.
His first generation of Forth hardware required a fairly complex
optimizing native code Forth compiler because of the irregular
nature of the instruction packing.  It used 6K of RAM and he felt
that was way too complicated so he simplified the compiler to just a
couple of K.

If you are claiming that his tools are the size of others then you
are not very well informed.  It may run on big Pentium machines but
the Pentium compiler is simple and small as is his target compiler
for embedded chips.

Some of my own research has been into how much smaller than 1K an
optimizing native code compiler for one of these machines can be.
I have written a lot of them.  They are certainly not as big
as everything else. ;-)

> > Which is easier depends entirely on what you know and
> > the problem at hand.
>
> Sure? I mean the "problem at hand"? It would be interesting to know if
> there are differences depending on the problem.

I like to acknowledge that some people have legitimate problems to
solve that don't occur for other people.  When you get an embedded
programming job they might say, "We are building our widget using this
processor and this language."  If that happens then that implies a long
list of assumptions about how the problem to be solved is constrained
by processor and language features.

At another job an employee might be told "We used to make our
widgets with this processor and program them in this language but
now we are making a new product and want to examine all our options
since we are not constrained by having already made decisions about
the target platform and target tools."

Sometimes the object is low production cost because quantity is high
and development cost is not so important.  Time to market might matter.
Sometimes low volume or one-off projects create problems where
development cost or time are all that matters.  You know the old
saying, "Fast, cheap, soon, pick two."  I like to extend it and
say "Fast, cheap, soon, standard, pick two."

Best Wishes

Article: 139038
Subject: Re: Bullshit! - Re: Zero operand CPUs
From: Jeff Fox <fox@ultratechnology.com>
Date: Wed, 18 Mar 2009 21:19:41 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Mar 18, 9:46 am, "Antti.Luk...@googlemail.com"
<Antti.Luk...@googlemail.com> wrote:
> Jeff & Helmar
>
> another holy war?

Certainly not.  I was just trying to inform Helmar that the term
zero operand architecture doesn't mean NOP only as he claims. ;-)
And that led to my explaining other things that he seemed confused
about.

> there are pro and contra for everything.
>
> what is sure is that as of today the register based (or maybe 0
> register ones like mproz3)
> small FPGA oriented soft-core processors are EASIER to deal with then
> those that are forth like and stack based

That is true for most people since they know those tools.

I know a lot of people who have been programming in Forth for decades
or designing soft-core processors and programming them for decades.

What is "easiest" is what you know.

I was a teaching assistant in a UC course on processor design.  A
small register based design was used to teach students one semester
and a small stack based design was used in another semester, so the
instructor could observe which was easier for the students to get.

> be the reason inbetween the ears, or in bad documentation or missing
> tools, whatever.

Well, certainly, if you pick a first-cut hobby design, or a design by
someone who can't write or hasn't written documentation, it is going
to be much harder to use than something that has been debugged,
optimized and documented.  But that's not inherent in comparing
register designs to stack designs.

But then I have had the opportunity to compare new students' reactions
to nice simple tutorial designs for register based and stack based
machines of similar quality from the same author.  And I have watched
them deal with debugging their hardware and software.

> Thing is non-stack based small soft-cores are easier for normal FPGA
> users

Yes. ;-)

> I would not mind if it would be the other way around, but it isnt.

I probably would.  But that's a different discussion altogether.

> So some forth
> guy could make some effort to describe and document the use of some
> small SoC that uses stack - based core and is not programmed in C
> (that is some
> thing else then ZPU what is stack cpu with GCC support)

That's why I made the videos of the stack machine design for FPGA
course done for SVFIG ten years ago available to the public and
why I have bothered to answer questions about it for a decade
and helped a number of people with their designs.

But I have been clear that like Chuck Moore my interests moved
from FPGA to full custom twenty years ago.  But I have worked
with some nice FPGA implementations along the way.

> Antti
> (hm i should look the B16.. again )

It is hard to get everything at once.  Often you need to
look a number of times to see the connections.

Best Wishes

Article: 139039
Subject: Re: Xilinx XAPP052 LFSR and its understanding
From: Weng Tianxiang <wtxwtx@gmail.com>
Date: Wed, 18 Mar 2009 23:22:20 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Mar 18, 8:39 pm, Peter Alfke <al...@sbcglobal.net> wrote:
> On Mar 18, 10:04 am, Weng Tianxiang <wtx...@gmail.com> wrote:
>
>
>
>
>
> > Hi,
> > I want to generate a random number in a FPGA chip and in a natural way
> > it leads to a famous paper Xilinx XAPP052 authored by Peter Alfke:
> > "Efficient Shift Register, LFSR Counters, and Long Pseudo-Random
> > Sequence Generators".
>
> > I have two problems with the note.
>
> > 1. I don't understand "Divide-by-5 to 16 counter". I appreciate if
> > someone explain the Table 2 and Figure 2 in more details.
>
> > 2. In Figure 5, there is an equation: (Q3 xnor Q4) xor (Q1 and Q2 and
> > Q3 and Q4).
> > (Q3 xnor Q4) is required to generate 4-bit LFSR counter. (Q1 and Q2
> > and Q3 and Q4) is used to avoid a dead-lock situation from happening
> > when Q1-Q4 = '1'.
>
> > Now the 4-bit LFSR counter dead-lock situation should be extended to
> > any bits long LFSR counter if 2 elements XNOR operation is needed.
> > Especially in Figure 5 for 63-bit LFSR counter. When all 63-bits are
> > '1', it would be dead-locked into the all '1' position, because (Q62
> > xnor Q63) = '1' if both Q63 and Q62 are '1'.  But the situation is
> > excluded into the equation in Figure 5.
>
> > In another words, if a seed data is closing or equal to all '1'
> > situation, the LFSR is a shorter random number generator than its
> > claim of a 63-bit length generator. There is no way to exactly know if
> > a seed data is closing to all '1' situation.
>
> > We can add logic equation as 4-bit situation does as follows:
> > (Q62 xnor Q63) xor (Q1 and Q2 and ... and Q63).
>
> > There is a new question: If there is a more clever idea to do the same
> > things to avoid the 63-bit dead-lock situation from happening?
>
> > Weng
>
> Weng, there is no mystery. All the LFSRs that I described count by (2
> exp n)-1, since they naturally will never get into the all-ones state.
> If you want them to include that state, you need to decode the state
> one prior, use one gate to invert the input, and the same gate gets
> you out of it again. The "high" cost is that one very wide gate,
> nothing else. LFSRs are well-documented. I had just dug up some old
> information that I had generated at Fairchild Applications in the
> 'sixties.
> Peter Alfke

Hi Peter,
Thank you.

The reason I want to exclude the dead-lock situation is that in my
project, I use the random number generator to generate random numbers
to detect design errors.  If there is an error, my design will detect
it.  But if all numbers generated are the same from some point, no
error is flagged and my testing is just wasted time, giving a false
"correct" indication.

But many zeros in the seed may guarantee that the situation of all '1'
will never happen.

Weng

Article: 139040
Subject: Re: Xilinx XAPP052 LFSR and its understanding
From: hal-usenet@ip-64-139-1-69.sjc.megapath.net (Hal Murray)
Date: Thu, 19 Mar 2009 01:45:29 -0500
Links: << >>  << T >>  << A >>

>But many zeros in seed may guarantee that the situation of all '1'
>will never happen.

What are you using for a seed?  The whole register?  If so, the
only seed that will get stuck in a loop is all 1s itself, so
just "don't do that".

Or mask off one of the bits in hardware.  Any one, it doesn't
matter.  Just something to make sure the whole register
can't get loaded with all 1s.


-- 
These are my opinions, not necessarily my employer's.  I hate spam.
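Hal's hardware suggestion (force one bit low so the register can never be loaded with all ones) has a direct software analogue for conditioning the seed. A minimal sketch, assuming a 63-bit register; the helper name is made up:

```python
def safe_seed(seed, width=63):
    # Clear one bit (here the top bit) so the shift register can never
    # be loaded with the all-ones lock-up state of an XNOR-feedback LFSR.
    mask = (1 << width) - 1              # keep only 'width' bits
    return seed & mask & ~(1 << (width - 1))
```

As Hal says, which bit is forced low doesn't matter; any single cleared bit keeps the all-ones state unreachable.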


From: Jonah Thomas <jethomas5@gmail.com>
Newsgroups: comp.lang.forth,comp.arch.embedded,comp.arch.fpga
Subject: Re: Bullshit! - Re: Zero operand CPUs
Date: Thu, 19 Mar 2009 03:26:40 -0400

Jeff Fox <fox@ultratechnology.com> wrote:

> The name 'zero operand' refers to the number of operand fields in
> the instruction set.  On a register machine you need operand
> fields in the instructions that have to be decoded after the type
> of instruction is decoded.
> 
> An example of a three operand instruction would be XOR register 01
> with register 02 and write the result to register 03.  This type
> of register operation is using three operands.  Some have more
> some have less.  The path to the data operated on is designated
> by operand fields in the instructions.
> 
> The equivalent instruction on a stack machine is just XOR.  There
> are zero operands in the *instruction* to be decoded.  That's
> what the term means.  The path to the data is hard wired to be
> an exclusive or the top of the stack and the next item on the
> stack with the result replacing these values as the new top of
> stack.  Since that is all implied ZERO operand fields are
> needed in the instruction.

....
 
> In the kind of register designs you are describing the paths to the
> ALU are selected only after the appropriate instruction format is
> known.  So first the instruction is decoded, then knowing which
> operands mean what to this instruction the paths to the ALU are set
> up. Then and only then the ALU operates on this path and the results
> are written to the appropriate place which might be specified by an
> operand decoded after the instruction format is known.
> 
> The point you seem to have missed is that one way to describe that is
> to say that a phase is needed between instruction decoding and the
> ALU operation because operands are used.  When operand decoding is
> needed the ALU can't perform the "ADD A,1" until after the instruction
> is decoded.  On zero operand designs the "ADD" can begin execution
> at the same time that instruction decoding begins.

So to make it perfectly clear, the bottom line here is that the
zero-operand approach can be simpler.

With the operands embedded in the instruction it's more flexible. You
get to choose which registers to use. But that choice has to get decoded
every time. Extra work for the processor, which has to be more
complicated. It takes time to do that, and the larger instructions take
a bigger bus or more space on a large bus.

If you can program so that what you need *will* be on the top of the
stack then you can avoid that overhead. Sometimes you might have to
juggle the stack which adds overhead, but you only have to do that
sometimes. The 1 2 or 3 operand instructions have their overhead all the
time, plus the extra gates on the chip are there all the time etc.

Requiring the programmer to learn how to manipulate a data stack is an
overhead. But it pays off. It's an overhead that happens not at
execution time, or compile time, or edit time, but at *education* time.
Learn how to do it and it pays off throughout your working lifetime,
whenever you have the chance to use a zero-operand architecture.

I apologise for stating the obvious.
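The decode difference described above can be sketched in a few lines of Python; the word layouts below are hypothetical toy formats, not any real chip's encoding:

```python
def reg_xor(regs, insn):
    # Three-operand form: a 16-bit word with a 4-bit opcode and three
    # 4-bit operand fields, all of which must be decoded before the
    # ALU paths can be selected.
    op = (insn >> 12) & 0xF
    assert op == 1                       # hypothetical XOR opcode
    rd = (insn >> 8) & 0xF               # destination register
    ra = (insn >> 4) & 0xF               # first source
    rb = insn & 0xF                      # second source
    regs[rd] = regs[ra] ^ regs[rb]

def stack_xor(stack):
    # Zero-operand form: the data paths are implied.  XOR always takes
    # the top two stack items and replaces them with the result.
    b, a = stack.pop(), stack.pop()
    stack.append(a ^ b)
```

Both compute the same XOR; the register form just carries three extra fields that must be decoded on every instruction.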

Article: 139041
Subject: Groundhog 2009 ...
From: JPS Nagi <jpsnagi@gmail.com>
Date: Thu, 19 Mar 2009 00:33:23 -0700 (PDT)
Links: << >>  << T >>  << A >>
Has anyone heard or worked on Groundhog 2009.
This is some specs on energy consumption / power for programmable
devices on the mobile platform.

Article: 139042
Subject: Re: Bullshit! - Re: Zero operand CPUs
From: rickman <gnuarm@gmail.com>
Date: Thu, 19 Mar 2009 00:43:41 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Mar 19, 3:26 am, Jonah Thomas <jethom...@gmail.com> wrote:
> Jeff Fox <f...@ultratechnology.com> wrote:
> > The name 'zero operand' refers to the number of operand fields in
> > the instruction set.  On a register machine you need operand
> > fields in the instructions that have to be decoded after the type
> > of instruction is decoded.
>
> > An example of a three operand instruction would be XOR register 01
> > with register 02 and write the result to register 03.  This type
> > of register operation is using three operands.  Some have more
> > some have less.  The path to the data operated on is designated
> > by operand fields in the instructions.
>
> > The equivalent instruction on a stack machine is just XOR.  There
> > are zero operands in the *instruction* to be decoded.  That's
> > what the term means.  The path to the data is hard wired to be
> > an exclusive or the top of the stack and the next item on the
> > stack with the result replacing these values as the new top of
> > stack.  Since that is all implied ZERO operand fields are
> > needed in the instruction.
>
> ....
>
> > In the kind of register designs you are describing the paths to the
> > ALU are selected only after the appropriate instruction format is
> > known.  So first the instruction is decoded, then knowing which
> > operands mean what to this instruction the paths to the ALU are set
> > up. Then and only then the ALU operates on this path and the results
> > are written to the appropriate place which might be specified by an
> > operand decoded after the instruction format is known.
>
> > The point you seem to have missed is that one way to describe that is
> > to say that a phase is needed between instruction decoding and the
> > ALU operation because operands are used.  When operand decoding is
> > needed the ALU can't perform the "ADD A,1" until after the instruction
> > is decoded.  On zero operand designs the "ADD" can begin execution
> > at the same time that instruction decoding begins.
>
> So to make it perfectly clear, the bottom line here is that the
> zero-operand approach can be simpler.
>
> With the operands embedded in the instruction it's more flexible. You
> get to choose which registers to use. But that choice has to get decoded
> every time. Extra work for the processor, which has to be more
> complicated. It takes time to do that, and the larger instructions take
> a bigger bus or more space on a large bus.

"Simpler" and "extra work" are relative terms.  Not necessarily
relative to other processors, but relative in terms of how you design
the processor.  Or maybe I should say, "it depends".  You can create a
very simple register based processor.  If the opcodes use a fixed
field for the registers selected, there is *no* register selection
decoding required, so that is not more complicated and there is no
extra work.  In fact, I think a register based design can be simpler
since it can reduce the mux required for input to the stack/register
file.

I think the difference is in the opcodes.  A register based processor
requires the registers to be specified, at least in conventional
usage.  I expect there are unusual designs that "imply" register
selection, but they are not used much in practice.  The MISC model,
which typically uses a stack, really only needs opcodes to specify the
operations and not the operands, so it can be very small.  Several
MISC designs use a 5 bit opcode.  This can reduce the decoding logic
and the amount of program storage needed.  But in both of these, the
devil is in the details.  Still, the potential is clearly there.


> If you can program so that what you need *will* be on the top of the
> stack then you can avoid that overhead. Sometimes you might have to
> juggle the stack which adds overhead, but you only have to do that
> sometimes. The 1 2 or 3 operand instructions have their overhead all the
> time, plus the extra gates on the chip are there all the time etc.

Different overhead.  Adding a few gates to decode a different opcode,
*if* it actually requires more gates, is not a big deal.  The extra
instructions needed to manipulate the contents of the stack take code
space and execution time and may or may not result in a "simpler"
processor.


> Requiring the programmer to learn how to manipulate a data stack is an
> overhead. But it pays off. It's an overhead that happens not at
> execution time, or compile time, or edit time, but at *education* time.
> Learn how to do it and it pays off throughout your working lifetime,
> whenever you have the chance to use a zero-operand architecture.
>
> I apologise for stating the obvious.

I am finding that none of this is truly obvious and may not always be
true.  Real world testing and comparisons are in order, real tests
that we can all see and understand...

Rick

Article: 139043
Subject: Re: Xilinx XAPP052 LFSR and its understanding
From: glen herrmannsfeldt <gah@ugcs.caltech.edu>
Date: Thu, 19 Mar 2009 07:46:01 +0000 (UTC)
Links: << >>  << T >>  << A >>
Weng Tianxiang <wtxwtx@gmail.com> wrote:
 
> The reason I want to exclude the dead-lock situation is that in my
> project, I use the random number generator to generate random number
> to detect design errors. If there is an error, my design will detect
> it. But if all numbers generated are the same from some point, there
> is no error generated and my testing is just waiting time, giving a
> false correct indicator.
 
> But many zeros in seed may guarantee that the situation of all '1'
> will never happen.

Any zeros in the seed will guarantee that it never happens.

The only way to get to the all ones state is to start there.
(Well, there is also cosmic rays going through and changing
the bits, but if that happens you have other problems, too.)

That does assume a properly designed LFSR.  If you randomly
choose taps it is likely that you get one with short cycles.

-- glen
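Glen's claims are easy to verify by brute force for the 4-bit case from XAPP052: the all-ones state is a fixed point, and every other state lies on the maximal 15-state cycle. The step function below is a sketch of the shift-left/XNOR arrangement, not Xilinx code:

```python
def lfsr4_step(state):
    # 4-bit XAPP052-style LFSR: shift left, shift in XNOR of taps Q4 and Q3
    q4 = (state >> 3) & 1
    q3 = (state >> 2) & 1
    feedback = 1 - (q3 ^ q4)             # XNOR
    return ((state << 1) | feedback) & 0xF

# All-ones is a lock-up state: the LFSR never leaves it...
assert lfsr4_step(0b1111) == 0b1111

# ...and no other seed can reach it: starting anywhere else visits
# all 15 remaining states (2**4 - 1) before repeating.
seen, s = set(), 0
while s not in seen:
    seen.add(s)
    s = lfsr4_step(s)
assert len(seen) == 15 and 0b1111 not in seen
```

An exhaustive check is obviously impractical at 63 bits, but the same argument holds there: with XNOR feedback the all-ones state maps only to itself, so any seed containing a zero never reaches it.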


Article: 139044
Subject: Re: Documenting a simple CPU
From: rickman <gnuarm@gmail.com>
Date: Thu, 19 Mar 2009 00:57:36 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Mar 18, 9:29 am, Jonathan Bromley <jonathan.brom...@MYCOMPANY.com>
wrote:
> Some of us have been a little frustrated by the
> hard-to-follow documentation of the nibz CPU.
> I thought it might be useful, as a comparison,
> to expose to public ridicule the documentation
> for a toy CPU I developed as fodder for some of
> our training courses.  It doesn't yet have an
> assembler (sorry Antti!!!) but I hope you will
> agree that the docs are complete enough for you
> to write one should you so wish.
>
> I'm not trying to stir up interest in this CPU
> design, even though I think it's quite cute;
> there are far too many RISC soft-cores out there
> already. =A0I just wanted to give an example of how
> you might go about documenting such a thing without
> putting in too much effort.  Its instruction set
> is roughly of the same complexity as nibz, I think.
>
> http://www.oxfordbromley.plus.com/files/miniCPU/arch.pdf

I think this is a very interesting processor spec.  It is very simple,
yet very functional.  I can see many applications for it.  A
comment... The logical shift left operation is not really needed
unless I am mistaken.  Since the opcode is free to specify any
combination of operands, you can use the same value for operand A and
B with an ADD operation which will result in a left shift.  Have any
of the students noticed this?

Rick
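Rickman's point checks out numerically: adding a value to itself doubles it, which equals a logical shift left by one once the sum is truncated to the word width. A quick sketch, assuming a 16-bit word:

```python
def shl1_via_add(x, width=16):
    # ADD x, x is the same as a logical shift left by 1:
    # doubling, truncated to the word width, drops the old top bit.
    return (x + x) & ((1 << width) - 1)
```

So a separate shift-left opcode is indeed redundant whenever both source operands are free to name the same register.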

Article: 139045
Subject: Re: Documenting a simple CPU
From: -jg <Jim.Granville@gmail.com>
Date: Thu, 19 Mar 2009 01:14:31 -0700 (PDT)
Links: << >>  << T >>  << A >>
Jonathan Bromley wrote:

> Some of us have been a little frustrated by the
> hard-to-follow documentation of the nibz CPU.
> I thought it might be useful, as a comparison,
> to expose to public ridicule the documentation
> for a toy CPU I developed as fodder for some of
> our training courses.  It doesn't yet have an
> assembler (sorry Antti!!!) but I hope you will
> agree that the docs are complete enough for you
> to write one should you so wish.

Err, with no assembler, how do you run the compiled cpu ?

Add one yourself to this ? AS Assembler ?
http://john.ccac.rwth-aachen.de:8000/as/download.html

Good docs, but missing is a resource report ?
Given  the 4+year time line, perhaps a couple of target FPGAs
at opposite ends of that time-frame ?

I see it's a 3 operand design, and 16b opcodes.
More natural for FPGA is 18 b opcodes ?
- and perhaps a register frame pointer, so larger
Block Ram can be  better accessed ?.
 perhaps r0 ?, as allocating that to 0 for ALL opcodes
seems a tad wasteful.

 FPGAs give you very good 'free' ram resource, so the
best SoftCPUs start from RAM size, and work backwards.

-jg


Article: 139046
Subject: Re: Bullshit! - Re: Zero operand CPUs
From: hal-usenet@ip-64-139-1-69.sjc.megapath.net (Hal Murray)
Date: Thu, 19 Mar 2009 03:52:37 -0500
Links: << >>  << T >>  << A >>

>I am finding that none of this is truly obvious and may not always be
>true.  Real world testing and comparisons are in order, real tests
>that we can all see and understand...

Back in the 70s, Xerox had the Mesa world running on Altos.
It was a stack based architecture.  The goal was to reduce
code space.  (That was back before people figured out that
Moore's law was going to make code size not very interesting.)

Given the available technology of the time, it worked great.

In addition to the stack (I think it was 5 or 6 registers)
there was a PC, a module pointer for global variables, and a
frame pointer for this procedure context.

The opcodes were implemented in microcode rather than gates,
so there was a lot of flexibility in assigning values.

Calls were fancy, but the simple case allocated a frame
off the free list, set up the return link and such.

Most of the opcodes were loads.  It was a 16 bit system, but
there was a lot of support for 32 bit arithmetic and pointers.

Except on rare occasions when you were hacking on the system,
we didn't care about the details of the architecture.  Code
is code.  The basic ideas don't change because the architecture
changes.  You have loads, stores, loops, adds, muls...
  y = a*x+b
turns into (handwave)
  load a
  load x
  mul
  load b
  add
  store y

Some people call "load" push.  If a or b are constants,
the load might be a load immediate...
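Hal's handwaved sequence actually runs on an evaluator only a few lines long; the mnemonics and tuple encoding below are invented for illustration:

```python
def run(program, env):
    # Tiny stack-machine evaluator: load pushes a variable, mul/add pop
    # two items and push the result, store pops into a named variable.
    stack = []
    for op, *arg in program:
        if op == "load":
            stack.append(env[arg[0]])
        elif op == "store":
            env[arg[0]] = stack.pop()
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return env

# y = a*x + b, exactly the handwaved sequence above:
code = [("load", "a"), ("load", "x"), ("mul",),
        ("load", "b"), ("add",), ("store", "y")]
```

Running `code` against `{"a": 3, "x": 4, "b": 5, "y": 0}` leaves `y` holding 17.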

It might be a little weird if you wanted to write assembly code.
I think I'd get used to it if I had some good examples to learn
from.  (I've written quite a bit of microcode back in the old
days and some Forth recently.)  If you have a good compiler
you never think about that stuff.

-- 
These are my opinions, not necessarily my employer's.  I hate spam.


Article: 139047
Subject: Re: Documenting a simple CPU
From: Jonathan Bromley <jonathan.bromley@MYCOMPANY.com>
Date: Thu, 19 Mar 2009 09:00:15 +0000
Links: << >>  << T >>  << A >>
On Thu, 19 Mar 2009 01:14:31 -0700 (PDT), Jim Granville wrote:

>Err, with no assembler, how do you run the compiled cpu ?

Hand-coding.  Yup, really.  One of the interesting
benefits of a super-simple instruction set is that
this can be done, for small programs - and small
programs is all I've ever run on it (see below).

>Good docs, but missing is a resource report ?

Irrelevant in the target application (teaching
HDL syntax and techniques).  Around 450-500 
logic cells (4-LUT+FF) in typical FPGAs;
about 90MHz system clock rate; instructions
take between 3 and 7 system clocks to execute
if the APB-connected memory has no wait states.
The RTL implementation is pretty dumb, and
could easily be made much faster and tighter.
The only criterion for the present implementation
was that the RTL CPU should be synthesisable.

[snip sundry interesting comments]

But the purpose of this design was to create
a piece of Verilog code that does interesting
things, could be modified (specifically, have 
SystemVerilog language features grafted on), 
and was small enough for students to find 
the relevant bits easily in a 50-minute 
lab session.  I'm actually working on a
real-world version, for my own amusement,
but Ye Olde Original does what it aimed 
to do and I don't plan on fixing it :-)
-- 
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services

Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
jonathan.bromley@MYCOMPANY.com
http://www.MYCOMPANY.com

The contents of this message may contain personal views which 
are not the views of Doulos Ltd., unless specifically stated.

Article: 139048
Subject: Re: Documenting a simple CPU
From: Jonathan Bromley <jonathan.bromley@MYCOMPANY.com>
Date: Thu, 19 Mar 2009 09:03:05 +0000
Links: << >>  << T >>  << A >>
On Thu, 19 Mar 2009 00:57:36 -0700 (PDT), rickman wrote:

>The logical shift left operation is not really needed
>unless I am mistaken.  Since the opcode is free to specify any
>combination of operands, you can use the same value for operand A and
>B with an ADD operation which will result in a left shift.  Have any
>of the students noticed this?

No, but in fairness they would not be expected to;
the design is used on language and verification courses,
and the CPU architecture is only there to get a wide
range of interesting behaviours from a small design.

There is an embarrassingly large amount of redundancy 
and overlap in the instruction set.  I'm working on it :-)
-- 
Jonathan Bromley, Consultant

DOULOS - Developing Design Know-how
VHDL * Verilog * SystemC * e * Perl * Tcl/Tk * Project Services

Doulos Ltd., 22 Market Place, Ringwood, BH24 1AW, UK
jonathan.bromley@MYCOMPANY.com
http://www.MYCOMPANY.com

The contents of this message may contain personal views which 
are not the views of Doulos Ltd., unless specifically stated.

Article: 139049
Subject: Re: Documenting a simple CPU
From: -jg <Jim.Granville@gmail.com>
Date: Thu, 19 Mar 2009 02:36:17 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Mar 19, 9:00 pm, Jonathan Bromley <jonathan.brom...@MYCOMPANY.com>
wrote:
> On Thu, 19 Mar 2009 01:14:31 -0700 (PDT), Jim Granville wrote:
> >Err, with no assembler, how do you run the compiled cpu ?
>
> Hand-coding.  Yup, really.  One of the interesting
> benefits of a super-simple instruction set is that
> this can be done, for small programs - and small
> programs is all I've ever run on it (see below).
>
> >Good docs, but missing is a resource report ?
>
> Irrelevant in the target application (teaching
> HDL syntax and techniques).

Wow - now there's a surprising comment.

I'd hope any student taught HDL would also understand
WHAT a resource report was, where to find it, and what
his HDL could be expected to use.

So you don't connect the students to the silicon at all ?

> Around 450-500
> logic cells (4-LUT+FF) in typical FPGAs;
> about 90MHz system clock rate; instructions
> take between 3 and 7 system clocks to execute
> if the APB-connected memory has no wait states.
> The RTL implementation is pretty dumb, and
> could easily be made much faster and tighter.
> The only criterion for the present implementation
> was that the RTL CPU should be synthesisable.
>
> [snip sundry interesting comments]
>
> But the purpose of this design was to create
> a piece of Verilog code that does interesting
> things, could be modified (specifically, have
> SystemVerilog language features grafted on),
> and was small enough for students to find
> the relevant bits easily in a 50-minute
> lab session.  I'm actually working on a
> real-world version, for my own amusement,
> but Ye Olde Original does what it aimed
> to do and I don't plan on fixing it :-)

You could make that an exercise for the student ;)

-jg


