
Messages from 146175

Article: 146175
Subject: Re: using an FPGA to emulate a vintage computer
From: Ahem A Rivet's Shot <steveo@eircom.net>
Date: Sun, 7 Mar 2010 14:30:20 +0000
On Sun, 07 Mar 2010 07:48:01 -0500
Greg Menke <gusenet@comcast.net> wrote:

> 
> Ahem A Rivet's Shot <steveo@eircom.net> writes:
> 
> > On Sat, 6 Mar 2010 01:58:43 -0800 (PST)
> > Quadibloc <jsavard@ecn.ab.ca> wrote:
> >
> >> On Mar 5, 10:16 am, Ahem A Rivet's Shot <ste...@eircom.net> wrote:
> >> 
> >> >         No but x = *(y + 3) will store in x the contents of the
> >> > memory location at 3 + the value of y just as x = y[3] will and x = 3
> >> > [y] will, which is what I stated. You missed out the all important *
> >> > and ()s.
> >> 
> >> Intentionally. My point was that, while there is _some_ truth to the
> >> claim that C arrays tread rather lightly on the ground of hardware
> >> addressing, the claim that C doesn't have arrays at all, and the C
> >> array subscript operator does nothing at all but add two addresses
> >> together... is not *quite* true.
> >
> > 	The C subscript operator does do nothing other than adding two
> > numbers and dereferencing the result, that last action is rather
> > important. The validity of constructs like 2[a] and *(2+a) make this
> > clear - as does the equivalence of a and &(a[0]) or of *a and a[0]
> > where a is a pointer.
> 
> Yet when dereferencing arrays of rank >= 2, dimensions are automatically
> incorporated into the effective address, so it's not quite equivalent to
> a simple addition of pointer and offset.

	There is a way to regard it as such - consider a[x][y] as being
equivalent to *(a[x] + y) where we regard a[x] as devolving into a pointer
to a row of the array. But yes multidimensional array support is a little
more involved than single dimensional array support. It's still not a
proper type though.

-- 
Steve O'Hara-Smith                          |   Directable Mirror Arrays
C:>WIN                                      | A better way to focus the sun
The computer obeys and wins.                |    licences available see
You lose and Bill collects.                 |    http://www.sohara.org/

Article: 146176
Subject: Re: using an FPGA to emulate a vintage computer
From: Ahem A Rivet's Shot <steveo@eircom.net>
Date: Sun, 7 Mar 2010 14:39:31 +0000
On Sun, 7 Mar 2010 05:35:51 -0800 (PST)
Quadibloc <jsavard@ecn.ab.ca> wrote:

> On Mar 7, 1:45 am, Ahem A Rivet's Shot <ste...@eircom.net> wrote:
> > On Sat, 6 Mar 2010 02:01:30 -0800 (PST)
> >
> > Quadibloc <jsav...@ecn.ab.ca> wrote:
> > > The 2[a] syntax actually *works* in C the way it was described? I am
> > > astonished. I would expect it to yield the contents of the memory
> > > location a+&2 assuming that &2 can be persuaded to yield up the
> > > location where the value of the constant "2" is stored.
> >
> >         Yes of course it does - why else would I have mentioned it in my
> > first post in this threadlet? a is a pointer, 2 is an integer and 2
> > [a] is the same as a[2] is the same as *(a+2) and the rules for adding
> > pointers and integers are well defined in C. This is the heart of my
> > original point, array notation in C is syntactic sugar for pointer
> > arithmetic (and also for allocation which I neglected to mention in my
> > original post).
> 
> If a[2] was the same as *(a+2), then, indeed, since addition is

	Which it is by definition.

> commutative, it would make sense that 2[a], being the same as *(2+a),
> would be the same.
> 
> But a[2] is the same as *(&a+2) which is why I expected 2[a] to be the
> same as *(&2+a).

	No it's not - a in this context is treated as a pointer.

> Unless in *(a+2) "a" suddenly stops meaning what I would expect it to
> mean.

	Well I think you'll find a never did mean what you would
expect it to mean. In many contexts including this one (and a[2] and 2[a]
and a + 2 or even (a+2)[3]) it is a pointer not an array - the only context
in which it is an array is the declaration.

> In that case, C has considerably more profound problems than not
> having arrays.

	C is consistent; it's just that the array syntax really is a thin
mask for pointer arithmetic - even the multidimensional array syntax,
though that's more fiddly.

-- 
Steve O'Hara-Smith                          |   Directable Mirror Arrays
C:>WIN                                      | A better way to focus the sun
The computer obeys and wins.                |    licences available see
You lose and Bill collects.                 |    http://www.sohara.org/

Article: 146177
Subject: Re: Virtex-4 driving a 5V CMOS
From: "ajv" <aj.veltkamp@gmail.com>
Date: Sun, 07 Mar 2010 08:56:22 -0600
>
>> Hi all,
>>
>> I need to drive a 5V CMOS input from a 2.5V Virtex-4 bank. Is there
>> anything wrong with simply using a pullup to 5V? The speed doesn't matter.
>
> 	This is not recommended practice... the FPGA's protection diodes
> aren't going to be happy.
> 	If you only have one signal, and you don't care if it's slow, you
> can just use an SMD transistor to make an open collector/drain from your
> FPGA output, or an HCT IC, or an HCT picogate... or even simpler, replace
> your 5V CMOS IC with an HCT part if possible.
> 	If you need something more elaborate, there are zillions of voltage
> translator chips...
>

WHATEVER YOU DO, DO NOT PUT ANY VOLTAGE ON THE I/O LINES GREATER THAN
RECOMMENDED BY THE XILINX DATA SHEET!!!  I had a pin on a connector short out
to an I/O pin of a Virtex-II and burned out the diode inside the package.
Since speed isn't an issue, do one of two things: have the I/O pin drive an
open-collector buffer with its output pulled up to 5 V, or connect the pin
to the tri-state (output-enable) input of an open-drain buffer, with the
input of the buffer tied to ground and the output pulled up to 5 V.

Posted through http://www.FPGARelated.com

Article: 146178
Subject: Re: Laptop for FPGA design?
From: Michael S <already5chosen@yahoo.com>
Date: Sun, 7 Mar 2010 07:41:41 -0800 (PST)
On Mar 7, 4:28 pm, General Schvantzkoph <schvantzk...@yahoo.com>
wrote:
> On Sun, 07 Mar 2010 06:11:07 -0800, Michael S wrote:
> > On Mar 7, 12:25 pm, John Adair <g...@enterpoint.co.uk> wrote:
> >> If you can get it, the T9900 is better than the T9800, but they are
> >> fairly rare, with most companies seeming to push the quad core instead.
>
> >> I have not got a mobile I7 yet but we do have desktop I7 and they have
> >> been very good.
>
> > Sure, desktop I7 are fast. With 8MB of cache and not so heavy reliance
> > on turbo-boost one can expect them being fast. On the other hand, 35W
> > mobile variants have 4MB or smaller cache and are critically dependent
> > on turbo-boost, since relative to mobile C2D their "normal" clock
> > frequency is slow. Still, it's just my gut feeling; I never benchmarked
> > mobile i7 vs mobile C2D, so I could be wrong about their relative
> > merits.
>
> >> Laptops using the desktop I7 have been a definate no on battery
> >> lifetime of 1hr being typical but when I get the chance I will try the
> >> mobile I7 as it promises much. Parallel processors will be more use in
> >> a couple of years when tools have better use of them.
>
> >> On OS I think there are X64 drivers but I would only go that way if I
> >> had a really large design to deal with. Bugs and problems are far more
> >> common in X64 and Linux versions of the tools and with the relatively
> >> tiny user base bugs can take a while to surface and dare I say it get
> >> fixed. Life is busy enough without adding unnecessary problems.
>
> >> John Adair
> >> Enterpoint Ltd.
>
> > For the last year or so we do nearly all our FPGA development on
> > Ws2003/x64. So far, no problems. Even officially deprecated Rainbow (now
> > SafeNet)  USB Software Guards work fine. XP64 is derived from the same
> > code base.
> > We almost never use 64-bit tools, but very much appreciate the ability
> > to launch numerous instances of memory-hungry 32-bit tools. More a
> > matter of convenience than necessity? In single-user environment, yes.
> > But why should we give up convenience that costs so little?
>
> I have benchmarked Core2s vs iCore7s. 6M Cache iCore2s are faster on a
> clock for clock basis than the 8M Cache iCore7 when running NCVerilog.
> The iCore7 is a little faster on a clock for clock basis when running
> Xilinx place and route tools.

My experience with Altera tools (synthesis and p&r, never benchmarked
a simulation) is quite different.
i7-920 (8 MB/2.66 GHz) is about 1.5 times faster than E6750 (4 MB/2.66
GHz) and only marginally slower than E8400 (6MB/3.00 GHz). Taking into
account that the fastest non-extraordinary-expensive i7 variant
(i7-960, 3.2 GHz) runs at almost the same frequency as the fastest C2D
(E8600, 3.33 GHz) I'd say that in absolute terms core-i7 is faster.

> The cache architecture of the iCore7 sucks,
> it's a three level cache vs a two level cache on the Core2.

Nehalem's L2 cache is much smaller, yes, but 1.5 times faster. Seems
like a fair trade-off.
The slower L1D cache (4 clocks instead of 3 clocks in C2D) sounds like a
bigger problem.

> Also there is
> less cache per processor on the iCore7 (2M) than the Core2 (3M) so the
> degradation in performance is greater.

Only when running multiple threads.
But we are talking about FPGA development, which is still mostly single-
threaded. The Core-i7 has all 8MB available to a single thread; in C2D/
C2Q a single core has access to 6 MB.


>Finally the absolute clock rate
> for the Core2s is higher than it is for the iCore7, combine that with the
> faster clock for clock simulation performance and the Core2 is the clear
> winner for FPGA development.

Only when measured by price/performance.
In an absolute sense the i7-960 (or the i7-975 for the rich kids among us)
should be faster.
And, of course, in a multi-user environment the core-i7 (or, for bigger
shops, the Xeon-55xx) wins by a very wide margin.

Don't get me wrong, I'd very much like a CPU that combines the cache
hierarchy of the C2D with the IMC, turbo-boost and fast unaligned access
of the core-i7, but that's not going to happen. With AMD as weak as it is
right now we have no choice but to take the whole package that Intel
wants to sell us. And as a package the core-i7 is not bad, especially for
multi-user work.




Article: 146179
Subject: Re: Laptop for FPGA design?
From: General Schvantzkoph <schvantzkoph@yahoo.com>
Date: 7 Mar 2010 17:03:18 GMT
On Sun, 07 Mar 2010 07:41:41 -0800, Michael S wrote:

> On Mar 7, 4:28 pm, General Schvantzkoph <schvantzk...@yahoo.com> wrote:
>> On Sun, 07 Mar 2010 06:11:07 -0800, Michael S wrote:
>> > On Mar 7, 12:25 pm, John Adair <g...@enterpoint.co.uk> wrote:
>> >> If you can get it, the T9900 is better than the T9800, but they are
>> >> fairly rare, with most companies seeming to push the quad core instead.
>>
>> >> I have not got a mobile I7 yet but we do have desktop I7 and they
>> >> have been very good.
>>
>> > Sure, desktop I7 are fast. With 8MB of cache and not so heavy
>> > reliance on turbo-boost one can expect them being fast. On the other
>> > hand, 35W mobile variants have 4MB or smaller cache and are
>> > critically dependent on turbo-boost, since relative to mobile C2D
>> > their "normal" clock frequency is slow. Still, it's just my gut
>> > feeling; I never benchmarked mobile i7 vs mobile C2D, so I could be
>> > wrong about their relative merits.
>>
>> >> Laptops using the desktop I7 have been a definate no on battery
>> >> lifetime of 1hr being typical but when I get the chance I will try
>> >> the mobile I7 as it promises much. Parallel processors will be more
>> >> use in a couple of years when tools have better use of them.
>>
>> >> On OS I think there are X64 drivers but I would only go that way if
>> >> I had a really large design to deal with. Bugs and problems are far
>> >> more common in X64 and Linux versions of the tools and with the
>> >> relatively tiny user base bugs can take a while to surface and dare
>> >> I say it get fixed. Life is busy enough without adding unnecessary
>> >> problems.
>>
>> >> John Adair
>> >> Enterpoint Ltd.
>>
>> > For the last year or so we do nearly all our FPGA development on
>> > Ws2003/x64. So far, no problems. Even officially deprecated Rainbow
>> > (now SafeNet)  USB Software Guards work fine. XP64 is derived from
>> > the same code base.
>> > We almost never use 64-bit tools, but very much appreciate the
>> > ability to launch numerous instances of memory-hungry 32-bit tools.
>> > More a matter of convenience than necessity? In single-user
>> > environment, yes. But why should we give up convenience that costs so
>> > little?
>>
>> I have benchmarked Core2s vs iCore7s. 6M Cache iCore2s are faster on a
>> clock for clock basis than the 8M Cache iCore7 when running NCVerilog.
>> The iCore7 is a little faster on a clock for clock basis when running
>> Xilinx place and route tools.
> 
> My experience with Altera tools (synthesis and p&r, never benchmarked a
> simulation) is quite different.
> i7-920 (8 MB/2.66 GHz) is about 1.5 times faster than E6750 (4 MB/2.66
> GHz) and only marginally slower than E8400 (6MB/3.00 GHz). Taking into
> account that the fastest non-extraordinary-expensive i7 variant (i7-960,
> 3.2 GHz) runs at almost the same frequency as the fastest C2D (E8600,
> 3.33 GHz) I'd say that in absolute terms core-i7 is faster.
> 
>> The cache architecture of the iCore7 sucks, it's a three level cache vs
>> a two level cache on the Core2.
> 
> Nehalem's L2 cache is much smaller, yes, but 1.5 times faster. Seems
> like a fair trade-off.
> The slower L1D cache (4 clocks instead of 3 clocks in C2D) sounds like a
> bigger problem.
> 
>> Also there is
>> less cache per processor on the iCore7 (2M) than the Core2 (3M) so the
>> degradation in performance is greater.
> 
> Only when running multiple threads.
> But we are talking about FPGA development, which is still mostly single-
> threaded. The Core-i7 has all 8MB available to a single thread; in C2D/C2Q
> a single core has access to 6 MB.
> 
> 
>>Finally the absolute clock rate
>> for the Core2s is higher than it is for the iCore7, combine that with
>> the faster clock for clock simulation performance and the Core2 is the
>> clear winner for FPGA development.
> 
> Only when measured by price/performance. In an absolute sense the i7-960
> (or the i7-975 for the rich kids among us) should be faster.
> And, of course, in a multi-user environment the core-i7 (or, for bigger
> shops, the Xeon-55xx) wins by a very wide margin.
> 
> Don't get me wrong, I'd very much like a CPU that combines the cache
> hierarchy of the C2D with the IMC, turbo-boost and fast unaligned access of
> the core-i7, but that's not going to happen. With AMD as weak as it is right
> now we have no choice but to take the whole package that Intel wants to
> sell us. And as a package the core-i7 is not bad, especially for multi-user work.

Simulation performance is much more important than place and route
performance. The iCore7 is a little faster than the Core2 when doing
place and routes, but the 6M Core2 wins hands down when doing simulations
(4M Core2s are much slower than 6M Core2s). I spend 100X as much time
doing simulations as I spend doing place and routes, so the small
advantage that iCore7s have doing synthesis/place and routes is dwarfed
by the simulation advantage that the Core2 has.

Article: 146180
Subject: Re: using an FPGA to emulate a vintage computer
From: Quadibloc <jsavard@ecn.ab.ca>
Date: Sun, 7 Mar 2010 09:54:04 -0800 (PST)
On Mar 7, 7:39 am, Ahem A Rivet's Shot <ste...@eircom.net> wrote:

>         Well I think you'll find a never did mean what you would
> expect it to mean. In many contexts including this one (and a[2] and 2[a]
> and a + 2 or even (a+2)[3]) it is a pointer not an array - the only context
> in which it is an array is the declaration.

Yes: the array name always refers to the pointer, and the subscript is
required to reference, not just to displace. That was my mistake; a
never did mean the "array" in the abstract sense... so that a "new,
improved" C copying Fortran 90 (or PL/I) could have statements like

int a[5],b[5] ;
...
b = a + 2 ;

and the compiler just makes

int a[5],b[5]
...
for (i00001 = 0; i00001 < 5; i00001++ )
 { b[i00001] = a[i00001] + 2 ;
 }

out of it.

John Savard

Article: 146181
Subject: Re: using an FPGA to emulate a vintage computer
From: Ahem A Rivet's Shot <steveo@eircom.net>
Date: Sun, 7 Mar 2010 18:09:22 +0000
On Sun, 7 Mar 2010 09:54:04 -0800 (PST)
Quadibloc <jsavard@ecn.ab.ca> wrote:

> On Mar 7, 7:39 am, Ahem A Rivet's Shot <ste...@eircom.net> wrote:
> 
> >         Well I think you'll find a never did mean what you would
> > expect it to mean. In many contexts including this one (and a[2] and 2
> > [a] and a + 2 or even (a+2)[3]) it is a pointer not an array - the only
> > context in which it is an array is the declaration.
> 
> Yes: the array name always refers to the pointer, and the subscript is
> required to reference, not just to displace. That was my mistake; a
> never did mean the "array" in the abstract sense... so that a "new,
> improved" C copying Fortran 90 (or PL/I) could have statements like
> 
> int a[5],b[5] ;
> ...
> b = a + 2 ;
> 
> and the compiler just makes
> 
> int a[5],b[5]
> ...
> for (i00001 = 0; i00001 < 5; i00001++ )
>  { b[i00001] = a[i00001] + 2 ;
>  }

	Yes that would be nice, especially if it extended to full blown matrix
arithmetic.

-- 
Steve O'Hara-Smith                          |   Directable Mirror Arrays
C:>WIN                                      | A better way to focus the sun
The computer obeys and wins.                |    licences available see
You lose and Bill collects.                 |    http://www.sohara.org/

Article: 146182
Subject: Re: using an FPGA to emulate a vintage computer
From: johnf@panix.com (John Francis)
Date: Sun, 7 Mar 2010 18:59:43 +0000 (UTC)
In article <20100307143020.fcc7e3df.steveo@eircom.net>,
Ahem A Rivet's Shot  <steveo@eircom.net> wrote:
>On Sun, 07 Mar 2010 07:48:01 -0500
>Greg Menke <gusenet@comcast.net> wrote:
>
>> 
>> Ahem A Rivet's Shot <steveo@eircom.net> writes:
>> > 	The C subscript operator does do nothing other than adding two
>> > numbers and dereferencing the result, that last action is rather
>> > important. The validity of constructs like 2[a] and *(2+a) make this
>> > clear - as does the equivalence of a and &(a[0]) or of *a and a[0]
>> > where a is a pointer.
>> 
>> Yet when dereferencing arrays of rank >= 2, dimensions are automatically
>> incorporated into the effective address, so its not quite equivalent to
>> a simple addition of pointer and offset.
>
>	There is a way to regard it as such - consider a[x][y] as being
>equivalent to *(a[x] + y) where we regard a[x] as devolving into a pointer
>to a row of the array. But yes multidimensional array support is a little
>more involved than single dimensional array support. It's still not a
>proper type though.

That's all very well, but in fact no C implementation of which I am
aware uses dope vectors when allocating multidimensional arrays. (I
have come across the practice in other languages).  In fact C has to
perform different calculations to evaluate the address of an element
a[i][j], depending on how a was defined (int a[4][5], or int** a).
The sizeof operator also knows something about array types.



Article: 146183
Subject: Re: using an FPGA to emulate a vintage computer
From: glen herrmannsfeldt <gah@ugcs.caltech.edu>
Date: Sun, 7 Mar 2010 20:13:29 +0000 (UTC)
In comp.arch.fpga John Francis <johnf@panix.com> wrote:
(snip)
 
> That's all very well, but in fact no C implementation of which I am
> aware uses dope vectors when allocating multidimensional arrays. (I
> have come across the practice in other languages).  In fact C has to
> perform different calculations to evaluate the address of an element
> a[i][j], depending on how a was defined (int a[4][5], or int** a).
> The sizeof operator also knows something about array types.

VMS compilers are supposed to allow for value, reference, or 
descriptor argument passing to support interlanguage calls.
The %val(), %ref(), and %descr() syntax is supposed to be
supported by all compilers.

-- glen

Article: 146184
Subject: Re: using an FPGA to emulate a vintage computer
From: Quadibloc <jsavard@ecn.ab.ca>
Date: Sun, 7 Mar 2010 13:16:56 -0800 (PST)
On Mar 7, 11:09 am, Ahem A Rivet's Shot <ste...@eircom.net> wrote:
> On Sun, 7 Mar 2010 09:54:04 -0800 (PST)
>
> Quadibloc <jsav...@ecn.ab.ca> wrote:
> > On Mar 7, 7:39 am, Ahem A Rivet's Shot <ste...@eircom.net> wrote:
>
> > >         Well I think you'll find a never did mean what you would
> > > expect it to mean. In many contexts including this one (and a[2] and
> > > 2[a] and a + 2 or even (a+2)[3]) it is a pointer not an array - the
> > > only context in which it is an array is the declaration.
>
> > Yes: the array name always refers to the pointer, and the subscript is
> > required to reference, not just to displace. That was my mistake; a
> > never did mean the "array" in the abstract sense... so that a "new,
> > improved" C copying Fortran 90 (or PL/I) could have statements like
>
> > int a[5],b[5] ;
> > ...
> > b = a + 2 ;
>
> > and the compiler just makes
>
> > int a[5],b[5]
> > ...
> > for (i00001 = 0; i00001 < 5; i00001++ )
> >  { b[i00001] = a[i00001] + 2 ;
> >  }
>
>         Yes that would be nice, especially if it extended to full blown
> matrix arithmetic.

To avoid confusion, it is probably better if multiplying two two-
dimensional arrays produces element-by-element multiplication - and,
to get matrix multiplication, you need to declare variables having a
MATRIX type, analogous to the COMPLEX type.

John Savard

Article: 146185
Subject: Re: using an FPGA to emulate a vintage computer
From: Ahem A Rivet's Shot <steveo@eircom.net>
Date: Sun, 7 Mar 2010 21:40:04 +0000
On Sun, 7 Mar 2010 18:59:43 +0000 (UTC)
johnf@panix.com (John Francis) wrote:

> In article <20100307143020.fcc7e3df.steveo@eircom.net>,
> Ahem A Rivet's Shot  <steveo@eircom.net> wrote:
> >On Sun, 07 Mar 2010 07:48:01 -0500
> >Greg Menke <gusenet@comcast.net> wrote:
> >
> >> 
> >> Ahem A Rivet's Shot <steveo@eircom.net> writes:
> >> > 	The C subscript operator does do nothing other than adding
> >> > two numbers and dereferencing the result, that last action is rather
> >> > important. The validity of constructs like 2[a] and *(2+a) make this
> >> > clear - as does the equivalence of a and &(a[0]) or of *a and a[0]
> >> > where a is a pointer.
> >> 
> >> Yet when dereferencing arrays of rank >= 2, dimensions are
> >> automatically incorporated into the effective address, so it's not
> >> quite equivalent to a simple addition of pointer and offset.
> >
> >	There is a way to regard it as such - consider a[x][y] as being
> >equivalent to *(a[x] + y) where we regard a[x] as devolving into a
> >pointer to a row of the array. But yes multidimensional array support is
> >a little more involved than single dimensional array support. It's still
> >not a proper type though.
> 
> That's all very well, but in fact no C implementation of which I am
> aware uses dope vectors when allocating multidimensional arrays. (I

	Indeed they don't - it is simply a matter of how you interpret the
partial construct a[x] when a is declared as a two dimensional array - one
way of interpreting it is as a pointer to an array row even though it is
not a valid construct on its own.

	There is a clear extension of the one dimensional case: a declaration
int a[5] leaves future references to a as being equivalent to &(a[0]) so
it is reasonable to regard a declaration int a[4][5] as leaving future
references like a[i] as equivalent to &(a[i][0]).

> have come across the practice in other languages).  In fact C has to
> perform different calculations to evaluate the address of an element
> a[i][j], depending on how a was defined (int a[4][5], or int** a).
> The sizeof operator also knows something about array types.

	If a is defined as int **a then a[i][j] is not valid at all.

-- 
Steve O'Hara-Smith                          |   Directable Mirror Arrays
C:>WIN                                      | A better way to focus the sun
The computer obeys and wins.                |    licences available see
You lose and Bill collects.                 |    http://www.sohara.org/

Article: 146186
Subject: Re: using an FPGA to emulate a vintage computer
From: Peter Flass <Peter_Flass@Yahoo.com>
Date: Sun, 07 Mar 2010 17:30:00 -0500
Ahem A Rivet's Shot wrote:
> On Sun, 7 Mar 2010 09:54:04 -0800 (PST)
> Quadibloc <jsavard@ecn.ab.ca> wrote:
> 
>> On Mar 7, 7:39 am, Ahem A Rivet's Shot <ste...@eircom.net> wrote:
>>
>>>         Well I think you'll find a never did mean what you would
>>> expect it to mean. In many contexts including this one (and a[2] and 2
>>> [a] and a + 2 or even (a+2)[3]) it is a pointer not an array - the only
>>> context in which it is an array is the declaration.
>> Yes: the array name always refers to the pointer, and the subscript is
>> required to reference, not just to displace. That was my mistake; a
>> never did mean the "array" in the abstract sense... so that a "new,
>> improved" C copying Fortran 90 (or PL/I) could have statements like
>>
>> int a[5],b[5] ;
>> ...
>> b = a + 2 ;
>>
>> and the compiler just makes
>>
>> int a[5],b[5]
>> ...
>> for (i00001 = 0; i00001 < 5; i00001++ )
>>  { b[i00001] = a[i00001] + 2 ;
>>  }
> 
> 	Yes that would be nice, especially if it extended to full blown matrix
> arithmetic.
> 

Sure, just use PL/I ;-)

Article: 146187
Subject: Re: using an FPGA to emulate a vintage computer
From: Joe Pfeiffer <pfeiffer@cs.nmsu.edu>
Date: Sun, 07 Mar 2010 19:07:30 -0700
johnf@panix.com (John Francis) writes:

> In article <20100307143020.fcc7e3df.steveo@eircom.net>,
> Ahem A Rivet's Shot  <steveo@eircom.net> wrote:
>>On Sun, 07 Mar 2010 07:48:01 -0500
>>Greg Menke <gusenet@comcast.net> wrote:
>>
>>> 
>>> Ahem A Rivet's Shot <steveo@eircom.net> writes:
>>> > 	The C subscript operator does do nothing other than adding two
>>> > numbers and dereferencing the result, that last action is rather
>>> > important. The validity of constructs like 2[a] and *(2+a) make this
>>> > clear - as does the equivalence of a and &(a[0]) or of *a and a[0]
>>> > where a is a pointer.
>>> 
>>> Yet when dereferencing arrays of rank >= 2, dimensions are automatically
>>> incorporated into the effective address, so it's not quite equivalent to
>>> a simple addition of pointer and offset.
>>
>>	There is a way to regard it as such - consider a[x][y] as being
>>equivalent to *(a[x] + y) where we regard a[x] as devolving into a pointer
>>to a row of the array. But yes multidimensional array support is a little
>>more involved than single dimensional array support. It's still not a
>>proper type though.
>
> That's all very well, but in fact no C implementation of which I am
> aware uses dope vectors when allocating multidimensional arrays. (I
> have come across the practice in other languages).  In fact C has to
> perform different calculations to evaluate the address of an element
> a[i][j], depending on how a was defined (int a[4][5], or int** a).
> The sizeof operator also knows something about array types.

"Regard" is a key word there -- the syntax shown ought to work whether
it's actually a dope vector (I assume you mean the same thing I learned
about under the name "Iliffe vectors") or not.  Haven't had a chance to
try it....
-- 
As we enjoy great advantages from the inventions of others, we should
be glad of an opportunity to serve others by any invention of ours;
and this we should do freely and generously. (Benjamin Franklin)

Article: 146188
Subject: Re: using an FPGA to emulate a vintage computer
From: Charles Richmond <frizzle@tx.rr.com>
Date: Sun, 07 Mar 2010 20:08:27 -0600
Uwe Klo wrote:
> Quadibloc schrieb:
>> On Mar 5, 12:44 pm, Joe Pfeiffer <pfeif...@cs.nmsu.edu> wrote:
>>> #include <stdio.h>
>>> int main()
>>> {
>>>     int a[4];
>>>
>>>     printf("a[2] at 0x%8x\n", &(a[2]));
>>>     printf("2[a] at 0x%8x\n", &(2[a]));
>>>     printf("(a+2) is 0x%8x\n", a+2);
>>>     printf("(2+a) is 0x%8x\n", 2+a);
>>>
>>> }
>>>
>>> [pfeiffer@snowball ~/temp]# ./awry
>>> a[2] at 0xbfff97b8
>>> 2[a] at 0xbfff97b8
>>> (a+2) is 0xbfff97b8
>>> (2+a) is 0xbfff97b8
>> The 2[a] syntax actually *works* in C the way it was described? I am
>> astonished. I would expect it to yield the contents of the memory
>> location a+&2 assuming that &2 can be persuaded to yield up the
>> location where the value of the constant "2" is stored.
> 
> You can think of the "a" in "a[4]" as a named numerical (integer)
> constant (alias), giving the address of the memory block that was
> allocated by the definition.
> 
> So there is no difference, between using that (named) constant or an
> explicit numerical constant.
> 
> The only differences between:
>    (1)   int a[4];
> and:
>    (2)   int * a = malloc( 4 * sizeof(int));
> is the place where the memory is allocated and the value in (2) may be
> changed later. (And the amount of typing, of course!)
> 
> In both cases you can use "a[1]" or "*(a+1)" for access.
> 

Yes, you *can* use "a[1]" or "*(a+1)" for access. And you can 
think of "a" as a "named numerical (integer) constant", except 
that array "a" also has an implied length that is used to scale 
whatever integer is added to the address in "a".

-- 
+----------------------------------------+
|     Charles and Francis Richmond       |
|                                        |
|  plano dot net at aquaporin4 dot com   |
+----------------------------------------+

Article: 146189
Subject: Re: using an FPGA to emulate a vintage computer
From: Charles Richmond <frizzle@tx.rr.com>
Date: Sun, 07 Mar 2010 20:11:53 -0600
Ahem A Rivet's Shot wrote:
> On Sat, 6 Mar 2010 01:58:43 -0800 (PST)
> Quadibloc <jsavard@ecn.ab.ca> wrote:
> 
>> On Mar 5, 10:16 am, Ahem A Rivet's Shot <ste...@eircom.net> wrote:
>>
>>>         No but x = *(y + 3) will store in x the contents of the memory
>>> location at 3 + the value of y just as x = y[3] will and x = 3[y] will,
>>> which is what I stated. You missed out the all important * and ()s.
>> Intentionally. My point was that, while there is _some_ truth to the
>> claim that C arrays tread rather lightly on the ground of hardware
>> addressing, the claim that C doesn't have arrays at all, and the C
>> array subscript operator does nothing at all but add two addresses
>> together... is not *quite* true.
> 
> 	The C subscript operator does do nothing other than adding two
> numbers and dereferencing the result, that last action is rather important.
> The validity of constructs like 2[a] and *(2+a) make this clear - as does
> the equivalence of a and &(a[0]) or of *a and a[0] where a is a pointer.
> 
> 	C does have good support for pointers and adding integers to
> pointers and for declaring blocks of storage with an array like syntax.
> 

... but do *not* forget that when an integer is added to a 
pointer, that integer is "scaled" by the length associated with 
that pointer. So if "a" is a pointer to a four byte integer, then 
"a+1" actually adds *four* to the pointer. The integer "1" is 
scaled by the length of the object pointed to by "a".

-- 
+----------------------------------------+
|     Charles and Francis Richmond       |
|                                        |
|  plano dot net at aquaporin4 dot com   |
+----------------------------------------+

Article: 146190
Subject: Re: using an FPGA to emulate a vintage computer
From: Joe Pfeiffer <pfeiffer@cs.nmsu.edu>
Date: Sun, 07 Mar 2010 20:17:14 -0700
Links: << >>  << T >>  << A >>
Charles Richmond <frizzle@tx.rr.com> writes:
>
> ... but do *not* forget that when an integer is added to a pointer,
> that integer is "scaled" by the length associated with that
> pointer. So if "a" is a pointer to a four byte integer, then "a+1"
> actually adds *four* to the pointer. The integer "1" is scaled by the
> length of the object pointed to by "a".

That fact took me several painful days to learn.  I had (in a project I
don't remember, for reasons I don't remember) used the + syntax to
index into a buffer of integers, and had scaled the offset myself.
Which meant, of course, that everything seemed fine for a ways into the
buffer, then mysteriously segfaulted.

Hmmm...  I've got quite a few like that, with vividly remembered bugs in
totally forgotten projects.
-- 
As we enjoy great advantages from the inventions of others, we should
be glad of an opportunity to serve others by any invention of ours;
and this we should do freely and generously. (Benjamin Franklin)

Article: 146191
Subject: Re: Looking for a USB JTAG cable
From: GrizzlySteve <sbattazzo@gmail.com>
Date: Sun, 7 Mar 2010 19:17:39 -0800 (PST)
Links: << >>  << T >>  << A >>
The original digilent USB cable only works with Digilent Adept
software. Somebody had supposedly reverse engineered the protocol for
some digilent boards with that programmer onboard and set it up to
work with an open source programming tool on Linux, but I had
absolutely no success with that when I tried it with the original
nexys board.

The "XUP USB-JTAG Programming Cable" sold by Digilent works like a
charm, but it is only available to academic customers... and I see the
price has gone up by almost $30 since I ordered mine, but it's still
much cheaper than the official Xilinx cable.

Apparently, you could alternatively use an FTDI 2232 chip as a
replacement for USB-JTAG (there's some convenient info on that here:
http://www.rcs.uncc.edu/wiki/index.php/Lab_Notes#Software_Setup ) and
I think there are some pretty cheap 2232 breakout boards to be had...
it won't be faster than the parallel cable, but at least you can run
it on your laptop with no problems.

Article: 146192
Subject: Re: using an FPGA to emulate a vintage computer
From: Ahem A Rivet's Shot <steveo@eircom.net>
Date: Mon, 8 Mar 2010 07:39:31 +0000
Links: << >>  << T >>  << A >>
On Sun, 07 Mar 2010 20:11:53 -0600
Charles Richmond <frizzle@tx.rr.com> wrote:

> Ahem A Rivet's Shot wrote:
> > On Sat, 6 Mar 2010 01:58:43 -0800 (PST)
> > Quadibloc <jsavard@ecn.ab.ca> wrote:
> > 
> >> On Mar 5, 10:16 am, Ahem A Rivet's Shot <ste...@eircom.net> wrote:
> >>
> >>>         No but x = *(y + 3) will store in x the contents of the memory
> >>> location at 3 + the value of y just as x = y[3] will and x = 3[y]
> >>> will, which is what I stated. You missed out the all important * and
> >>> ()s.
> >> Intentionally. My point was that, while there is _some_ truth to the
> >> claim that C arrays tread rather lightly on the ground of hardware
> >> addressing, the claim that C doesn't have arrays at all, and the C
> >> array subscript operator does nothing at all but add two addresses
> >> together... is not *quite* true.
> > 
> > 	The C subscript operator does do nothing other than adding two
> > numbers and dereferencing the result, that last action is rather
> > important. The validity of constructs like 2[a] and *(2+a) make this
> > clear - as does the equivalence of a and &(a[0]) or of *a and a[0]
> > where a is a pointer.
> > 
> > 	C does have good support for pointers and adding integers to
> > pointers and for declaring blocks of storage with an array like syntax.
> > 
> 
> ... but do *not* forget that when an integer is added to a 
> pointer, that integer is "scaled" by the length associated with 
> that pointer.

	Yep *good* support for adding integers to pointers.

-- 
Steve O'Hara-Smith                          |   Directable Mirror Arrays
C:>WIN                                      | A better way to focus the sun
The computer obeys and wins.                |    licences available see
You lose and Bill collects.                 |    http://www.sohara.org/

Article: 146193
Subject: Re: using an FPGA to emulate a vintage computer
From: johnf@panix.com (John Francis)
Date: Mon, 8 Mar 2010 07:45:53 +0000 (UTC)
Links: << >>  << T >>  << A >>
In article <20100307214004.5b5fc8b4.steveo@eircom.net>,
Ahem A Rivet's Shot  <steveo@eircom.net> wrote:
>On Sun, 7 Mar 2010 18:59:43 +0000 (UTC)
>johnf@panix.com (John Francis) wrote:
>
>> have come across the practice in other languages).  In fact C has to
>> perform different calculations to evaluate the address of an element
>> a[i][j], depending on how a was defined (int a[4][5], or int** a).
>> The sizeof operator also knows something about array types.
>
>	If a is defined as int **a then a[i][j] is not valid at all.

Rubbish.  a[i][j] is a perfectly legal term in both cases I supplied,
and has a well-defined way of calculating the address.



Article: 146194
Subject: Re: using an FPGA to emulate a vintage computer
From: Ahem A Rivet's Shot <steveo@eircom.net>
Date: Mon, 8 Mar 2010 09:43:42 +0000
Links: << >>  << T >>  << A >>
On Mon, 8 Mar 2010 07:45:53 +0000 (UTC)
johnf@panix.com (John Francis) wrote:

> In article <20100307214004.5b5fc8b4.steveo@eircom.net>,
> Ahem A Rivet's Shot  <steveo@eircom.net> wrote:
> >On Sun, 7 Mar 2010 18:59:43 +0000 (UTC)
> >johnf@panix.com (John Francis) wrote:
> >
> >> have come across the practice in other languages).  In fact C has to
> >> perform different calculations to evaluate the address of an element
> >> a[i][j], depending on how a was defined (int a[4][5], or int** a).
> >> The sizeof operator also knows something about array types.
> >
> >	If a is defined as int **a then a[i][j] is not valid at all.
> 
> Rubbish.  a[i][j] is a perfectly legal term in both cases I supplied,
> and has a well-defined way of calculating the address.

	Er, OK - you're right: int **a is a pointer to a pointer to
integers, so the first offset works in terms of sizeof(int *) and the
second in terms of sizeof(int).

-- 
Steve O'Hara-Smith                          |   Directable Mirror Arrays
C:>WIN                                      | A better way to focus the sun
The computer obeys and wins.                |    licences available see
You lose and Bill collects.                 |    http://www.sohara.org/

Article: 146195
Subject: Re: Modelsim PE vs. Aldec Active-HDL (PE)
From: Martin Thompson <martin.j.thompson@trw.com>
Date: Mon, 08 Mar 2010 11:53:52 +0000
Links: << >>  << T >>  << A >>
rickman <gnuarm@gmail.com> writes:

> I find the GUI will save me a lot of typing when instantiating
> modules.  I use the "generate test bench" feature to build a file
> with the meat and potatoes in it and I copy that to the higher level
> module.

Ahh, I use VHDL-mode in Emacs for that, which is why I haven't missed
it :)


-- 
martin.j.thompson@trw.com 
TRW Conekt - Consultancy in Engineering, Knowledge and Technology
http://www.conekt.net/electronics.html

Article: 146196
Subject: Re: Modelsim PE vs. Aldec Active-HDL (PE)
From: Martin Thompson <martin.j.thompson@trw.com>
Date: Mon, 08 Mar 2010 12:04:31 +0000
Links: << >>  << T >>  << A >>
KJ <kkjennings@sbcglobal.net> writes:

> On Mar 5, 5:34am, Martin Thompson <martin.j.thomp...@trw.com> wrote:
>>
>> Am I the only one that makes *no* use of the various "project things"
>> (either in Modelsim or Aldec)? I just have a makefile and use the GUI
>> to run the sim (from "their" command-line) and show me the waveforms.
>> I guess I don't like to be tied to a tool (as much as I can manage)
>> much as I don't like to be tied to a particular silicon vendor (as
>> much as I can manage :)
>>
>
> But you're also running *their* commands to compile, run and view so
> you're not really any more independent.  

This is true, but the "porting" can be done once and pushed into
scripts.  Porting my "muscle-memory" is a lot harder, if the buttons
to click move around :)

Waveform viewing is still an issue, as that will likely change the
most, but I spend a lot less time doing that than most other tasks.
Certainly, the differences between two tools didn't pain me much when
I was trying two in parallel.

> Maintaining make files can be a chore also, unless you use something
> to help you manage it...but then you're now dependent on that tool
> as well.

Emacs.  I don't mind being dependent on that so much.

>
>> Am I missing something valuable, or is it just different?
>>
> Probably depends on which scenario is more likely to occur
> 1. Change sim tools

Or using a variety all the time... I'd like to do more experimentation
and comparison, esp. of the open source tools.

> 2. Add new developers (temporary, or because you move on to something
> else in the company)
>
> If #1 is prevalent, then maybe using other tools to help you manage
> 'make' is better.  If #2 is more prevalent, then using the tool's
> project system is probably better in easing the transition.  

I guess that's a point in its favour (assuming I can't "convert" the
incomers to Emacs :)

> If neither is particularly likely...well...then it probably doesn't
> much matter since one can probably be just as productive with
> various approaches.

Which is probably why we have lots of approaches - different strokes
and all that!

Cheers,
Martin

-- 
martin.j.thompson@trw.com 
TRW Conekt - Consultancy in Engineering, Knowledge and Technology
http://www.conekt.net/electronics.html

Article: 146197
Subject: Using the SignalTap Logic Analyzer
From: pinkisntwell <pinkisntwell@gmail.com>
Date: Mon, 8 Mar 2010 04:40:17 -0800 (PST)
Links: << >>  << T >>  << A >>
I'm using Quartus and I'm trying to compile a SignalTap Logic Analyzer
file with my project. No matter what I do the only indication I get is
"Please compile the project to continue". I have tried compiling and
recompiling to no avail. Any help?

Article: 146198
Subject: Some Active-HDL questions
From: "Pete Fraser" <pfraser@covad.net>
Date: Mon, 8 Mar 2010 07:23:47 -0800
Links: << >>  << T >>  << A >>
I have an evaluation copy of Active-HDL, and am having
some (presumably) newbie issues with it.

I went through their VHDL tutorial, but it has all sorts
of visual editors in the flow that I'm not interested in.
I tried importing my Modelsim XE project, and that
sort of worked, but it didn't convert my "do" file.

Could anybody point me to a simple "do" file that will
compile a vhdl test bench, the UUT and a few supporting
files, open a waveform window and add the signal
configuration to the window, fire up the sim, and run
for a period specified in the file. Can I do this without
messing about with workspaces and projects?

I really like the looks of the interface, and the speed,
but I seem to have a minor issue with analog displays.
I can select a single bus, and allow the software to
determine the range for analog display, but when I try
doing this on multiple busses, the software comes up
with a ridiculously high gain and clips the waveforms.
It does this even if all the busses have the same range.

Any suggestions?

Also, is there a more appropriate forum to ask these
sorts of questions? I couldn't find an Active-HDL forum.
I'll try phoning the FAE, but I thought I'd get a head
start by asking here.

Thanks

Pete




Article: 146199
Subject: Re: Some Active-HDL questions
From: Muzaffer Kal <kal@dspia.com>
Date: Mon, 08 Mar 2010 07:43:35 -0800
Links: << >>  << T >>  << A >>
On Mon, 8 Mar 2010 07:23:47 -0800, "Pete Fraser" <pfraser@covad.net>
wrote:

>I have an evaluation copy of Active-HDL, and am having
>some (presumably) newbie issues with it.
>
>I went through their VHDL tutorial, but it has all sorts
>of visual editors in the flow that I'm not interested in.
>I tried importing my Modelsim XE project, and that
>sort of worked, but it didn't convert my "do" file.
>
>Could anybody point me to a simple "do" file that will
>compile a vhdl test bench, the UUT and a few supporting
>files, open a waveform window and add the signal
>configuration to the window, fire up the sim, and run
>for a period specified in the file. Can I do this without
>messing about with workspaces and projects?
>
Here is a small example, not tested:
---------------------------------------------
set run_time 10us
vlib work
vmap work work
vcom -93 foo.vhd
vsim -t 1ps work.foo
add wave -radix Hexadecimal sim:/foo/*
run $run_time
-----------------------------------------------

For all of this to work, you need a blank workspace file. Last time I
did this, I created an empty workspace and copied it to the new
directory, after which you can just use the command-line.

-- 
Muzaffer Kal

DSPIA INC.
ASIC/FPGA Design Services

http://www.dspia.com


