Messages from 161425

Article: 161425
Subject: Re: Why differences between Mealy-type and Moore-type clock-gated
From: KJ <kkjennings@sbcglobal.net>
Date: Fri, 9 Aug 2019 20:17:31 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Friday, August 9, 2019 at 4:53:12 PM UTC-4, Weng Tianxiang wrote:
> Why differences between Mealy-type and Moore-type clock-gated state machines are important on how to stop clocking?
Refer to my post on comp.lang.vhdl at https://groups.google.com/forum/#!topic/comp.lang.vhdl/E4YvtRSYTdU where I explain the error of the writer's ways (Hint: The writers didn't really have a Mealy state machine in the first place... at least not in Figure 1).
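
For context, a minimal VHDL sketch of the distinction (my own illustration, not code from the paper or the linked post): a Moore output is a function of the present state alone, while a Mealy output also depends on the present inputs.

-- Illustrative only; not code from the paper or the linked comp.lang.vhdl post.
library ieee;
use ieee.std_logic_1164.all;

entity fsm_demo is
  port (
    clk, rst, req : in  std_logic;
    moore_busy    : out std_logic;   -- depends on state only
    mealy_grant   : out std_logic    -- depends on state AND current input
  );
end entity;

architecture rtl of fsm_demo is
  type state_t is (IDLE, BUSY);
  signal state : state_t := IDLE;
begin
  process (clk) is
  begin
    if rising_edge(clk) then
      if rst = '1' then
        state <= IDLE;
      else
        case state is
          when IDLE => if req = '1' then state <= BUSY; end if;
          when BUSY => state <= IDLE;
        end case;
      end if;
    end if;
  end process;

  -- Moore-style output: a function of the state alone.
  moore_busy <= '1' when state = BUSY else '0';

  -- Mealy-style output: a function of the state and the present input,
  -- so it can change combinationally between clock edges.
  mealy_grant <= '1' when (state = IDLE and req = '1') else '0';
end architecture;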

Kevin Jennings

Article: 161426
Subject: Bayer Pattern to RGB VHDL CODE
From: abirov@gmail.com
Date: Sun, 11 Aug 2019 00:38:00 -0700 (PDT)
Links: << >>  << T >>  << A >>
Please, VHDL gurus: I get an 8-row Bayer signal from an MT9 parallel-data camera. Could someone help me, or share VHDL code to convert from the Bayer pattern to RGB?

Article: 161427
Subject: Re: Bayer Pattern to RGB VHDL CODE
From: abirov@gmail.com
Date: Sun, 11 Aug 2019 00:45:28 -0700 (PDT)
Links: << >>  << T >>  << A >>
There is a lot of information that relies on Matlab, but I don't know how to use Matlab for this purpose. Could someone explain it for a very new Matlab user?


Article: 161428
Subject: Re: VHDL TIME support in Vivado
From: Allan Herriman <allanherriman@hotmail.com>
Date: Sun, 11 Aug 2019 03:46:26 -0500
Links: << >>  << T >>  << A >>
On Fri, 09 Aug 2019 11:42:10 -0700, Rick C wrote:

> On Friday, August 9, 2019 at 1:49:59 PM UTC-4, Rob Gaddi wrote:
>> Y'all.  It's 2019.  TIME has been in VHDL since what, 1987?  And yet
>> Vivado remains unable to successfully divide an amount of time you want
>> to wait by a clock period to get a compile-time integer.
>> 
>> https://www.xilinx.com/support/answers/57964.html is from 2014.  Five
>> years.  In five years, Xilinx has remained unable to perform simple
>> division.  Absolutely embarrassing.
> 
> Obviously Xilinx customers aren't asking for it.  After all, Xilinx is
> very responsive to their customers, no?


I can't tell whether you're being sarcastic.  Xilinx /should/ be 
responsive to their customers.  My experience has been that they're 
typically the opposite of responsive, when it comes to bugs or (lack of) 
HDL support in the tools.

I've found two synthesis bugs in Vivado in the past few weeks (one 
Verilog regarding incorrect macro expansion, the other VHDL where it will 
sometimes think a large 2D array of std_logic is a memory, start trying 
to map it to a RAM, realise that's not the right thing to do, then 
royally stuff up the generated code).  In both cases it gives a warning 
(which means I can (automatically) search for it to know whether I'm 
triggering that bug).
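
For illustration, a minimal sketch (my own construction, not the design that actually triggered the bug) of the kind of large 2D std_logic array a synthesis tool might try to treat as a memory:

-- Hypothetical example only; not the code that triggered the bug.
library ieee;
use ieee.std_logic_1164.all;

entity big_array_demo is
  port (
    clk  : in  std_logic;
    row  : in  integer range 0 to 255;
    col  : in  integer range 0 to 63;
    din  : in  std_logic;
    dout : out std_logic
  );
end entity;

architecture rtl of big_array_demo is
  -- A large true 2D array of std_logic.  Depending on how it is indexed,
  -- a synthesis tool may decide this looks enough like a RAM to try
  -- mapping it to block memory.
  type sl_2d is array (0 to 255, 0 to 63) of std_logic;
  signal a : sl_2d := (others => (others => '0'));
begin
  process (clk) is
  begin
    if rising_edge(clk) then
      a(row, col) <= din;        -- indexed write
      dout        <= a(row, col);
    end if;
  end process;
end architecture;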

I'm not even going to bother reporting them.  The process is just too 
difficult.


To contradict myself, here's an example of a bug report of mine on the 
Xilinx forum that was handled well, and the bug was actually fixed (it 
took about a year, but they did fix it).
https://forums.xilinx.com/t5/Synthesis/Bug-report-Vivado-VHDL-assertion-
still-broken/m-p/842926
That one had a small, self-contained test case though.  In my experience, 
most tool bugs will only show up on a massive design, and refuse to 
manifest themselves on a small test case.

Allan

Article: 161429
Subject: Re: VHDL TIME support in Vivado
From: Rick C <gnuarm.deletethisbit@gmail.com>
Date: Sun, 11 Aug 2019 08:51:52 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Sunday, August 11, 2019 at 4:46:34 AM UTC-4, Allan Herriman wrote:
> On Fri, 09 Aug 2019 11:42:10 -0700, Rick C wrote:
>=20
> > On Friday, August 9, 2019 at 1:49:59 PM UTC-4, Rob Gaddi wrote:
> >> Y'all.  It's 2019.  TIME has been in VHDL since what, 1987?  And yet
> >> Vivado remains unable to successfully divide an amount of time you wan=
t
> >> to wait by a clock period to get a compile-time integer.
> >>=20
> >> https://www.xilinx.com/support/answers/57964.html is from 2014.  Five
> >> years.  In five years, Xilinx has remained unable to perform simple
> >> division.  Absolutely embarrassing.
> >=20
> > Obviously Xilinx customers aren't asking for it.  After all, Xilinx is
> > very responsive to their customers, no?
>=20
>=20
> I can't tell whether you're being sarcastic.  Xilinx /should/ be=20
> responsive to their customers.  My experience has been that they're=20
> typically the opposite of responsive, when it comes to bugs or (lack of)=
=20
> HDL support in the tools.
>=20
> I've found two synthesis bugs in Vivado in the past few weeks (one=20
> Verilog regarding incorrect macro expansion, the other VHDL where it will=
=20
> sometimes think a large 2D array of std_logic is a memory, start trying=
=20
> to map it to a RAM, realise that's not the right thing to do, then=20
> royally stuff up the generated code).  In both cases it gives a warning=
=20
> (which means I can (automatically) search for it to know whether I'm=20
> triggering that bug.
>=20
> I'm not even going to bother reporting them.  The process is just too=20
> difficult.
>=20
>=20
> To contradict myself, here's an example of a bug report of mine on the=20
> Xilinx forum that was handled well, and the bug was actually fixed (it=20
> took about a year, but they did fix it).
> https://forums.xilinx.com/t5/Synthesis/Bug-report-Vivado-VHDL-assertion-
> still-broken/m-p/842926
> That one had a small, self contained test case though.  In my experience,=
=20
> most tool bugs will only show up on a massive design, and refuse to=20
> manifest themselves on a snall test case.
>=20
> Allan

Yes, I was being sarcastic.  But in their defense it is no easier for them to fix difficult to reproduce bugs than it is for us to report them.  If there is no failure condition to reproduce the bug, how can we expect them to fix it?

I worked on a project once where the company knew of a problem with the tools that made the timing analysis invalid.  We said it was the tool, Altera said it must be our constraints.  That's the one total failure of constraints.  You can do many things to verify your design, but how do you verify your timing constraints?  If they aren't right the timing can fail which will likely be very sporadic and intermittent - hard to detect and diagnose.

Bottom line is FPGA companies target their largest customers.  Anyone else gets whatever is left over.  The Lattice FAE has been helpful to me more than once.  On one occasion he gave me enough info to allow me to program a slightly different part with the same bit stream file so I didn't have to rebuild the sources and requalify the design.  That was fantastic saving me tons of work.  I couldn't get the Indian support guys to even acknowledge it was possible.

-- 

  Rick C.

  - Get 1,000 miles of free Supercharging
  - Tesla referral code - https://ts.la/richard11209

Article: 161430
Subject: Re: VHDL TIME support in Vivado
From: Rob Gaddi <rgaddi@highlandtechnology.invalid>
Date: Mon, 12 Aug 2019 09:56:04 -0700
Links: << >>  << T >>  << A >>
On 8/9/19 7:48 PM, KJ wrote:
> On Friday, August 9, 2019 at 1:49:59 PM UTC-4, Rob Gaddi wrote:
>> Y'all.  It's 2019.  TIME has been in VHDL since what, 1987?  And yet Vivado
>> remains unable to successfully divide an amount of time you want to wait by a
>> clock period to get a compile-time integer.
>>
> Well, when the numbers get too big apparently...
> 
>> https://www.xilinx.com/support/answers/57964.html is from 2014.  Five years.  In
>> five years, Xilinx has remained unable to perform simple division.  Absolutely
>> embarrassing.
> You're not representing that post accurately.  The post showed four examples of time divided by time and it appears that three of the four did synthesize properly.  Only in the fourth case, 1000000 ns / 25 ns is a warning generated by the tool and the synthesis fails.  That would hardly qualify as "remained unable to perform simple division" in my book.
> 
> They do go on to say "Using time for integer calculations should be avoided and is not a recommended coding style supported by Vivado Synthesis.".  The stated rationale is "Vivado Synthesis has a limitation to the precision that is supported for time that goes in accordance with the LRM" but it's not clear what part of the LRM they are discussing or which range of time Vivado does accept.  At least they do recognize it as a deficiency, they are kicking out a warning message so kudos there.  But I agree that the suggested work around is weak.  The lack of a defined time range that is supported is also weak (unless it is mentioned in the documentation, I'm only looking at the post).
> 
> It's also not clear either whether if it had been written 'smaller' numbers if it would have worked (such as 1 sec / 25 ns) but I'm guessing not.
> 
> Kevin Jennings
> 

In my particular case, the numbers in question were 30 us / 8.333 ns, which 
should come out about 3600.  Instead I got -17.  Fortunately I declared my 
constants as natural rather than integer so it at least failed early rather than 
get all the way through to a broken bitstream.

They mention that "Vivado Synthesis has a limitation to the precision that is 
supported for time that goes in accordance with the LRM" but provide no 
information what that limit IS, instead just opting for the statement you quote 
above of "Time is hard, so we just have no expectation of getting it right."

When I run simulations there's an option to explicitly set the time resolution. 
I can't find any such option in Vivado, so I can't design with the foggiest idea 
what does or does not fit in what is almost certainly a signed 32-bit time.  But 
I can't imagine any practical resolution of time in which 30 us doesn't fit. 
I suppose it must be using 1 fs, which feels perhaps excessive.  But they don't 
talk about it, they just mention in an AR that it's "not a recommended coding 
style".

-- 
Rob Gaddi, Highland Technology -- www.highlandtechnology.com
Email address domain is currently out of order.  See above to fix.

Article: 161431
Subject: Re: VHDL TIME support in Vivado
From: Rob Gaddi <rgaddi@highlandtechnology.invalid>
Date: Mon, 12 Aug 2019 10:16:41 -0700
Links: << >>  << T >>  << A >>
On 8/11/19 8:51 AM, Rick C wrote:
> On Sunday, August 11, 2019 at 4:46:34 AM UTC-4, Allan Herriman wrote:
>> On Fri, 09 Aug 2019 11:42:10 -0700, Rick C wrote:
>>
>>> On Friday, August 9, 2019 at 1:49:59 PM UTC-4, Rob Gaddi wrote:
>>>> Y'all.  It's 2019.  TIME has been in VHDL since what, 1987?  And yet
>>>> Vivado remains unable to successfully divide an amount of time you want
>>>> to wait by a clock period to get a compile-time integer.
>>>>
>>>> https://www.xilinx.com/support/answers/57964.html is from 2014.  Five
>>>> years.  In five years, Xilinx has remained unable to perform simple
>>>> division.  Absolutely embarrassing.
>>>
>>> Obviously Xilinx customers aren't asking for it.  After all, Xilinx is
>>> very responsive to their customers, no?
>>
>>
>> I can't tell whether you're being sarcastic.  Xilinx /should/ be
>> responsive to their customers.  My experience has been that they're
>> typically the opposite of responsive, when it comes to bugs or (lack of)
>> HDL support in the tools.
>>
>> I've found two synthesis bugs in Vivado in the past few weeks (one
>> Verilog regarding incorrect macro expansion, the other VHDL where it will
>> sometimes think a large 2D array of std_logic is a memory, start trying
>> to map it to a RAM, realise that's not the right thing to do, then
>> royally stuff up the generated code).  In both cases it gives a warning
>> (which means I can (automatically) search for it to know whether I'm
>> triggering that bug.
>>
>> I'm not even going to bother reporting them.  The process is just too
>> difficult.
>>
>>
>> To contradict myself, here's an example of a bug report of mine on the
>> Xilinx forum that was handled well, and the bug was actually fixed (it
>> took about a year, but they did fix it).
>> https://forums.xilinx.com/t5/Synthesis/Bug-report-Vivado-VHDL-assertion-
>> still-broken/m-p/842926
>> That one had a small, self-contained test case though.  In my experience,
>> most tool bugs will only show up on a massive design, and refuse to
>> manifest themselves on a small test case.
>>
>> Allan
> 
> Yes, I was being sarcastic.  But in their defense it is no easier for them to fix difficult to reproduce bugs than it is for us to report them.  If there is no failure condition to reproduce the bug, how can we expect them to fix it?
> 
> I worked on a project once where the company knew of a problem with the tools that made the timing analysis invalid.  We said it was the tool, Altera said it must be our constraints.  That's the one total failure of constraints.  You can do many things to verify your design, but how do you verify your timing constraints?  If they aren't right the timing can fail which will likely be very sporadic and intermittent - hard to detect and diagnose.
> 
> Bottom line is FPGA companies target their largest customers.  Anyone else gets whatever is left over.  The Lattice FAE has been helpful to me more than once.  On one occasion he gave me enough info to allow me to program a slightly different part with the same bit stream file so I didn't have to rebuild the sources and requalify the design.  That was fantastic saving me tons of work.  I couldn't get the Indian support guys to even acknowledge it was possible.
> 

And yet, in those "difficult to reproduce bugs" situations?  I've often been in 
a scenario where I can say "You know what, there's nothing NDAed in this design, 
I really can send you the whole thing.  I don't know what's wrong in it, but 
this crashes your code, and you can trace it through."  What I get back is 
"Yeah, but can you work up a smaller test case?"  No, I can't.  That's not what 
I get paid to do.  I have isolated a specific failure condition by dumb luck, 
and am happy to share it, but spending a day trying to turn it into a general 
problem statement?  That's for someone on the vendor's payroll.

-- 
Rob Gaddi, Highland Technology -- www.highlandtechnology.com
Email address domain is currently out of order.  See above to fix.

Article: 161432
Subject: Re: VHDL TIME support in Vivado
From: KJ <kkjennings@sbcglobal.net>
Date: Mon, 12 Aug 2019 11:26:33 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Monday, August 12, 2019 at 12:56:10 PM UTC-4, Rob Gaddi wrote:
> On 8/9/19 7:48 PM, KJ wrote:

> When I run simulations there's an option to explicitly set the time resolution.
That would be useful sure, but that is not part of the LRM...

> I can't find any such option in Vivado, so I can't design with the foggiest idea
> what does or does not fit in what is almost certainly a signed 32-bit time.

I would guess that 2^31-1 fs might be the upper end of the range.  The LRM says "The only predefined physical type is type TIME. The range of TIME is implementation dependent, but it is guaranteed to include the range –2147483647 to +2147483647."

> But I can't imagine any practical resolution of time in which 30 us doesn't fit.

30us = 3E10 fs which is larger than 2^31 fs.  But the thing I don't get is that in the AR, they had examples of 10us and 100us (actually 10,000 ns and 100,000 ns) and stated that Vivado worked properly.  2^31 fs is only 2.147 us so if the limit is 2^31 fs, those cases should not have worked either.

> I suppose it must be using 1 fs, which feels perhaps excessive.
But compliant to the minimum standard of the LRM.  That expanding to 64 bits didn't come into play until VHDL-2017, oops 2018, oops 2019, oops??  Whenever the expanded range comes into play, it would not be implemented by tools until a long time afterwards.

Kevin

Article: 161433
Subject: Re: VHDL TIME support in Vivado
From: Rick C <gnuarm.deletethisbit@gmail.com>
Date: Mon, 12 Aug 2019 13:52:23 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Monday, August 12, 2019 at 2:26:36 PM UTC-4, KJ wrote:
> On Monday, August 12, 2019 at 12:56:10 PM UTC-4, Rob Gaddi wrote:
> > On 8/9/19 7:48 PM, KJ wrote:
>=20
> > When I run simulations there's an option to explicitly set the time res=
olution.=20
> That would be useful sure, but that is not part of the LRM...
>=20
> > I can't find any such option in Vivado, so I can't design with the fogg=
iest idea=20
> > what does or does not fit in what is almost certainly a signed 32-bit t=
ime.=20
>=20
> I would guess that 2^31-1 fs might be the upper end of the range.  The LR=
M says "The only predefined physical type is type TIME. The range of TIME i=
s implementation dependent, but it is guaranteed to include the range =E2=
=80=932147483647 to +2147483647."
>=20
> > But at  can't imagine any practical resolution of time in which 30 us d=
oesn't fit.=20
>=20
> 30us =3D 3E10 fs which is larger than 2^31 fs.  But the thing I don't get=
 is that in the AR, they had examples of 10us and 100us (actually 10,000 ns=
 and 100,000 ns) and stated that Vivado worked properly.  2^31 fs is only 2=
.147 us so if the limit is 2^31 fs, those cases should not have worked eith=
er.
>=20
> > I suppose it must be using 1 fs, which feels perhaps excessive.
> But compliant to the minimum standard of the LRM.  That expanding to 64 b=
its didn't come into play until VHDL-2017, oops 2018, oops 2019, oops??  Wh=
enever the expanded range comes into play, it would not be implemented by t=
ools until a long time afterwards.

I've always thought it was odd that VHDL uses fs for time, but only 32 bit =
integers.  Even today, is there much going on that ps is not good enough fo=
r?  Certainly it is a bit of a PITA that integers are limited to 32 bits.=
=20

--=20

  Rick C.

  -- Get 1,000 miles of free Supercharging
  -- Tesla referral code - https://ts.la/richard11209

Article: 161434
Subject: Re: VHDL TIME support in Vivado
From: Rob Gaddi <rgaddi@highlandtechnology.invalid>
Date: Mon, 12 Aug 2019 14:12:44 -0700
Links: << >>  << T >>  << A >>
On 8/12/19 1:52 PM, Rick C wrote:
> On Monday, August 12, 2019 at 2:26:36 PM UTC-4, KJ wrote:
>> On Monday, August 12, 2019 at 12:56:10 PM UTC-4, Rob Gaddi wrote:
>>> On 8/9/19 7:48 PM, KJ wrote:
>>
>>> When I run simulations there's an option to explicitly set the time resolution.
>> That would be useful sure, but that is not part of the LRM...
>>
>>> I can't find any such option in Vivado, so I can't design with the foggiest idea
>>> what does or does not fit in what is almost certainly a signed 32-bit time.
>>
>> I would guess that 2^31-1 fs might be the upper end of the range.  The LRM says "The only predefined physical type is type TIME. The range of TIME is implementation dependent, but it is guaranteed to include the range –2147483647 to +2147483647."
>>
>>> But I can't imagine any practical resolution of time in which 30 us doesn't fit.
>>
>> 30us = 3E10 fs which is larger than 2^31 fs.  But the thing I don't get is that in the AR, they had examples of 10us and 100us (actually 10,000 ns and 100,000 ns) and stated that Vivado worked properly.  2^31 fs is only 2.147 us so if the limit is 2^31 fs, those cases should not have worked either.
>>
>>> I suppose it must be using 1 fs, which feels perhaps excessive.
>> But compliant to the minimum standard of the LRM.  That expanding to 64 bits didn't come into play until VHDL-2017, oops 2018, oops 2019, oops??  Whenever the expanded range comes into play, it would not be implemented by tools until a long time afterwards.
> 
> I've always thought it was odd that VHDL uses fs for time, but only 32 bit integers.  Even today, is there much going on that ps is not good enough for?  Certainly it is a bit of a PITA that integers are limited to 32 bits.
> 

Nothing in any VHDL specification through 2008 prevents you from using 64-bit 
integers, but nothing demands it either.  Most simulators I've seen these days 
do in fact use 64, but Xilinx seems to have decided not to.

-- 
Rob Gaddi, Highland Technology -- www.highlandtechnology.com
Email address domain is currently out of order.  See above to fix.

Article: 161435
Subject: Re: VHDL TIME support in Vivado
From: Rick C <gnuarm.deletethisbit@gmail.com>
Date: Mon, 12 Aug 2019 15:06:08 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Monday, August 12, 2019 at 5:12:47 PM UTC-4, Rob Gaddi wrote:
> On 8/12/19 1:52 PM, Rick C wrote:
> > On Monday, August 12, 2019 at 2:26:36 PM UTC-4, KJ wrote:
> >> On Monday, August 12, 2019 at 12:56:10 PM UTC-4, Rob Gaddi wrote:
> >>> On 8/9/19 7:48 PM, KJ wrote:
> >>
> >>> When I run simulations there's an option to explicitly set the time resolution.
> >> That would be useful sure, but that is not part of the LRM...
> >>
> >>> I can't find any such option in Vivado, so I can't design with the foggiest idea
> >>> what does or does not fit in what is almost certainly a signed 32-bit time.
> >>
> >> I would guess that 2^31-1 fs might be the upper end of the range.  The LRM says "The only predefined physical type is type TIME. The range of TIME is implementation dependent, but it is guaranteed to include the range –2147483647 to +2147483647."
> >>
> >>> But I can't imagine any practical resolution of time in which 30 us doesn't fit.
> >>
> >> 30us = 3E10 fs which is larger than 2^31 fs.  But the thing I don't get is that in the AR, they had examples of 10us and 100us (actually 10,000 ns and 100,000 ns) and stated that Vivado worked properly.  2^31 fs is only 2.147 us so if the limit is 2^31 fs, those cases should not have worked either.
> >>
> >>> I suppose it must be using 1 fs, which feels perhaps excessive.
> >> But compliant to the minimum standard of the LRM.  That expanding to 64 bits didn't come into play until VHDL-2017, oops 2018, oops 2019, oops??  Whenever the expanded range comes into play, it would not be implemented by tools until a long time afterwards.
> >
> > I've always thought it was odd that VHDL uses fs for time, but only 32 bit integers.  Even today, is there much going on that ps is not good enough for?  Certainly it is a bit of a PITA that integers are limited to 32 bits.
> >
> 
> Nothing in any VHDL specification through 2008 prevents you from using 64-bit
> integers, but nothing demands it either.  Most simulators I've seen these days
> do in fact use 64, but Xilinx seems to have decided not to.

Huh?  The issue is what the synthesizer uses.  If you can't count on integers being more than 32 bits, you can't write synthesizable code for integers being more than 32 bits.

What good does it do for some systems to use 64 bit integers unless you don't want portable code?

-- 

  Rick C.

  -+ Get 1,000 miles of free Supercharging
  -+ Tesla referral code - https://ts.la/richard11209

Article: 161436
Subject: Re: VHDL TIME support in Vivado
From: Rob Gaddi <rgaddi@highlandtechnology.invalid>
Date: Mon, 12 Aug 2019 15:48:09 -0700
Links: << >>  << T >>  << A >>
On 8/12/19 3:06 PM, Rick C wrote:
> On Monday, August 12, 2019 at 5:12:47 PM UTC-4, Rob Gaddi wrote:
>> On 8/12/19 1:52 PM, Rick C wrote:
>>> On Monday, August 12, 2019 at 2:26:36 PM UTC-4, KJ wrote:
>>>> On Monday, August 12, 2019 at 12:56:10 PM UTC-4, Rob Gaddi wrote:
>>>>> On 8/9/19 7:48 PM, KJ wrote:
>>>>
>>>>> When I run simulations there's an option to explicitly set the time resolution.
>>>> That would be useful sure, but that is not part of the LRM...
>>>>
>>>>> I can't find any such option in Vivado, so I can't design with the foggiest idea
>>>>> what does or does not fit in what is almost certainly a signed 32-bit time.
>>>>
>>>> I would guess that 2^31-1 fs might be the upper end of the range.  The LRM says "The only predefined physical type is type TIME. The range of TIME is implementation dependent, but it is guaranteed to include the range –2147483647 to +2147483647."
>>>>
>>>>> But I can't imagine any practical resolution of time in which 30 us doesn't fit.
>>>>
>>>> 30us = 3E10 fs which is larger than 2^31 fs.  But the thing I don't get is that in the AR, they had examples of 10us and 100us (actually 10,000 ns and 100,000 ns) and stated that Vivado worked properly.  2^31 fs is only 2.147 us so if the limit is 2^31 fs, those cases should not have worked either.
>>>>
>>>>> I suppose it must be using 1 fs, which feels perhaps excessive.
>>>> But compliant to the minimum standard of the LRM.  That expanding to 64 bits didn't come into play until VHDL-2017, oops 2018, oops 2019, oops??  Whenever the expanded range comes into play, it would not be implemented by tools until a long time afterwards.
>>>
>>> I've always thought it was odd that VHDL uses fs for time, but only 32 bit integers.  Even today, is there much going on that ps is not good enough for?  Certainly it is a bit of a PITA that integers are limited to 32 bits.
>>>
>>
>> Nothing in any VHDL specification through 2008 prevents you from using 64-bit
>> integers, but nothing demands it either.  Most simulators I've seen these days
>> do in fact use 64, but Xilinx seems to have decided not to.
> 
> Huh?  The issue is what the synthesizer uses.  If you can't count on integers being more than 32 bits, you can't write synthesizable code for integers being more than 32 bits.
> 
> What good does it do for some systems to use 64 bit integers unless you don't want portable code?
> 

Well, we're talking about the specific case of TIME calculations in 
synthesizable code here, which means that you're always doing things that cook 
down at compile time to a constant integer value, and often a fairly small 
constant integer.  So a tool supporting TIME on a 64-bit integer rather than 32 
simply allows it to get those calculations correct rather than wrap around and 
say that you need negative numbers of clock ticks.

It's not an unreasonable thing, given that any PC sold in the last what, 
decade?, has a 64-bit processor in it.  And given that Vivado was written 
from scratch to replace ISE, the opportunity was certainly there to 
support at least TIME and probably all integers as 64-bit.
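
One possible workaround, sketched here (mine, not from the thread or the Xilinx AR): do the compile-time arithmetic on plain integers, e.g. picoseconds, so the result never depends on how a particular tool represents TIME internally.

-- Workaround sketch (not from the thread): plain-integer arithmetic in
-- picoseconds.  Values are illustrative.
package timing_consts is
  constant CLK_PERIOD_PS  : natural := 8333;        -- 120 MHz clock
  constant TIMEOUT_PS     : natural := 30_000_000;  -- 30 us
  -- Ceiling division keeps the wait at least as long as requested.
  constant TIMEOUT_CYCLES : natural :=
    (TIMEOUT_PS + CLK_PERIOD_PS - 1) / CLK_PERIOD_PS;
end package;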

-- 
Rob Gaddi, Highland Technology -- www.highlandtechnology.com
Email address domain is currently out of order.  See above to fix.

Article: 161437
Subject: Re: VHDL TIME support in Vivado
From: Rick C <gnuarm.deletethisbit@gmail.com>
Date: Mon, 12 Aug 2019 16:46:01 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Monday, August 12, 2019 at 6:48:13 PM UTC-4, Rob Gaddi wrote:
> On 8/12/19 3:06 PM, Rick C wrote:
> > On Monday, August 12, 2019 at 5:12:47 PM UTC-4, Rob Gaddi wrote:
> >> On 8/12/19 1:52 PM, Rick C wrote:
> >>> On Monday, August 12, 2019 at 2:26:36 PM UTC-4, KJ wrote:
> >>>> On Monday, August 12, 2019 at 12:56:10 PM UTC-4, Rob Gaddi wrote:
> >>>>> On 8/9/19 7:48 PM, KJ wrote:
> >>>>
> >>>>> When I run simulations there's an option to explicitly set the time resolution.
> >>>> That would be useful sure, but that is not part of the LRM...
> >>>>
> >>>>> I can't find any such option in Vivado, so I can't design with the foggiest idea
> >>>>> what does or does not fit in what is almost certainly a signed 32-bit time.
> >>>>
> >>>> I would guess that 2^31-1 fs might be the upper end of the range.  The LRM says "The only predefined physical type is type TIME. The range of TIME is implementation dependent, but it is guaranteed to include the range –2147483647 to +2147483647."
> >>>>
> >>>>> But I can't imagine any practical resolution of time in which 30 us doesn't fit.
> >>>>
> >>>> 30us = 3E10 fs which is larger than 2^31 fs.  But the thing I don't get is that in the AR, they had examples of 10us and 100us (actually 10,000 ns and 100,000 ns) and stated that Vivado worked properly.  2^31 fs is only 2.147 us so if the limit is 2^31 fs, those cases should not have worked either.
> >>>>
> >>>>> I suppose it must be using 1 fs, which feels perhaps excessive.
> >>>> But compliant to the minimum standard of the LRM.  That expanding to 64 bits didn't come into play until VHDL-2017, oops 2018, oops 2019, oops??  Whenever the expanded range comes into play, it would not be implemented by tools until a long time afterwards.
> >>>
> >>> I've always thought it was odd that VHDL uses fs for time, but only 32 bit integers.  Even today, is there much going on that ps is not good enough for?  Certainly it is a bit of a PITA that integers are limited to 32 bits.
> >>>
> >>
> >> Nothing in any VHDL specification through 2008 prevents you from using 64-bit
> >> integers, but nothing demands it either.  Most simulators I've seen these days
> >> do in fact use 64, but Xilinx seems to have decided not to.
> >
> > Huh?  The issue is what the synthesizer uses.  If you can't count on integers being more than 32 bits, you can't write synthesizable code for integers being more than 32 bits.
> >
> > What good does it do for some systems to use 64 bit integers unless you don't want portable code?
> >
> 
> Well, we're talking about the specific case of TIME calculations in
> synthesizable code here, which means that you're always doing things that cook
> down at compile time to a constant integer value, and often a fairly small
> constant integer.  So a tool supporting TIME on a 64-bit integer rather than 32
> simply allows it to get those calculations correct rather than wrap around and
> say that you need negative numbers of clock ticks.
> 
> It's not an unreasonable thing, given that any PC sold in the last what,
> decade?, has a 64-bit processor in it.  And given that Vivado was written
> from scratch to replace ISE, the opportunity was certainly there to
> support at least TIME and probably all integers as 64-bit.
> 
> -- 
> Rob Gaddi, Highland Technology -- www.highlandtechnology.com
> Email address domain is currently out of order.  See above to fix.

Doesn't changing integers to 64 bits have the potential of breaking code?  I would expect they would need a way to specify 64 vs. 32 bits for integers or even a separate type.

-- 

  Rick C.

  + Get 1,000 miles of free Supercharging
  + Tesla referral code - https://ts.la/richard11209

Article: 161438
Subject: Re: Philips LA PM3585 disassembler software wanted
From: <smed@none.i2p>
Date: Wed, 28 Aug 2019 20:13:00 +0000
Links: << >>  << T >>  << A >>
Hi,
I would be highly interested in the PM3585 disassembler software

Regards
smed


Article: 161439
Subject: Re: Philips LA PM3585 disassembler software wanted
From: frankcovending@gmail.com
Date: Mon, 2 Sep 2019 18:18:59 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Wednesday, August 28, 2019 at 4:13:03 PM UTC-4, sm...@none.i2p wrote:
> Hi,
> I would be highly interested in the PM3585 disassembler software
> 
> Regards
> smed

Which ones?

Article: 161440
Subject: PipelineC (again), dct example, looking for help/interest
From: Julian Kemmerer <absurdfatalism@gmail.com>
Date: Sat, 7 Sep 2019 13:11:04 -0700 (PDT)
Links: << >>  << T >>  << A >>
Hi folks, I'm looking for feedback on PipelineC and ideas of what to implement next.

I will point you to a recent reddit post which ultimately points to GitHub.

https://www.reddit.com/r/FPGA/comments/d0x2p5/serial_8x8_dct_in_pipelinec_lower_resource_usage/

Here is the code to get you interested:

// This is the unrolled version of the original dct copy-and-pasted algorithm
// https://www.geeksforgeeks.org/discrete-cosine-transform-algorithm-program/
// PipelineC iterations of dctTransformUnrolled are used
// to unroll the calculation serially in O(n^4) time

// Input 'matrix' and start=1 to begin calculation
// Input 'matrix' must stay constant until return .done

// 'sum' accumulates over iterations/clocks and should be pipelined
// So 'sum' must be a volatile global variable
// Keep track of when sum is valid and can be read+written
volatile uint1_t dct_volatiles_valid;
// sum will temporarily store the sum of cosine signals
volatile float dct_sum;
// dct_result will store the discrete cosine transform
// Signal that this is the iteration containing the 'done' result
typedef struct dct_done_t
{
        float matrix[DCT_M][DCT_N];
        uint1_t done;
} dct_done_t;
volatile dct_done_t dct_result;
dct_done_t dctTransformUnrolled(dct_pixel_t matrix[DCT_M][DCT_N], uint1_t start)
{
        // Assume not done yet
        dct_result.done = 0;

        // Start validates volatiles
        if(start)
        {
                dct_volatiles_valid = 1;
        }

        // Global func to handle getting to BRAM
        //     1) Lookup constants from BRAM (using iterators)
        //     2) Increment iterators
        // Returns next iterators and constants and will increment when requested
        dct_lookup_increment_t lookup_increment;
        uint1_t do_increment;
        // Only increment when volatiles valid
        do_increment = dct_volatiles_valid;
        lookup_increment = dct_lookup_increment(do_increment);

        // Unpack struct for ease of reading calculation code below
        float const_val;
        const_val = lookup_increment.lookup.const_val;
        float cos_val;
        cos_val = lookup_increment.lookup.cos_val;
        dct_iter_t i;
        i = lookup_increment.incrementer.curr_iters.i;
        dct_iter_t j;
        j = lookup_increment.incrementer.curr_iters.j;
        dct_iter_t k;
        k = lookup_increment.incrementer.curr_iters.k;
        dct_iter_t l;
        l = lookup_increment.incrementer.curr_iters.l;
        uint1_t reset_k;
        reset_k = lookup_increment.incrementer.increment.reset_k;
        uint1_t reset_l;
        reset_l = lookup_increment.incrementer.increment.reset_l;
        uint1_t done;
        done = lookup_increment.incrementer.increment.done;

        // Do math for this volatile iteration only when
        // can safely read+write volatiles
        if(dct_volatiles_valid)
        {
                // ~~~ The primary calculation ~~~:
                // 1) Float * cosine constant from lookup table
                float dct1;
                dct1 = (float)matrix[k][l] * cos_val;
                // 2) Increment sum
                dct_sum = dct_sum + dct1;
                // 3) constant * Float and assign into the output matrix
                dct_result.matrix[i][j] = const_val * dct_sum;

                // Sum accumulates during the k and l loops
                // So reset when they are rolling over
                if(reset_k & reset_l)
                {
                        dct_sum = 0.0;
                }

                // Done yet?
                dct_result.done = done;

                // Reset volatiles once done
                if(done)
                {
                        dct_volatiles_valid = 0;
                }
        }

        return dct_result;
}


What does this synthesize to?

Essentially a state machine where each state uses the same N clocks worth of logic to do work (the body of dctTransformUnrolled).

Consider the 'execution' of the function in time order. The logic consists of:

~17% of time for getting lookup constants & incrementing the iterators (dct_lookup_increment), reading the [k][l] value out of input 'matrix'

~21% of time for the 1) Float * cosine constant from lookup table, a floating point multiplier

~34% of time for the 2) Increment sum addition, a floating point adder

~21% of time for the 3) constant * Float, a floating point multiplier

~5% of time for the 3) assignment into the output matrix at [i][j]

That pipeline takes some fixed number of clock cycles N. That means every N clock cycles 'dct_volatiles_valid' will be 1 (after being set at the start). The algorithm unrolls as O(n^4) for 4096 total iterations. So the total latency in clock cycles is N * 4096.

Article: 161442
Subject: Re: How to write a correct code to do 2 writes to an array on same cycle?
From: Rick C <gnuarm.deletethisbit@gmail.com>
Date: Wed, 25 Sep 2019 09:00:32 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Tuesday, September 24, 2019 at 2:49:14 PM UTC-4, Weng Tianxiang wrote:
> Hi,
>=20
> Here is a code segment showing 2 methods doing 2 writes to an array with 2 different addresses on the same cycle:
>
> 1.
> p1: process(CLK) is
> begin
>   if CLK'event and CLK = '1' then
>     if C1 then
>       An_Array(a) <= D1;
>     end if;
>
>     -- I know a /= b
>     -- do I need to inform VHDL compiler of 2 different addresses?
>     if C2 then
>       An_Array(b) <= D2;
>     end if;
>   end if;
> end process;

I think the short answer is YES, the compiler has no way of knowing you are writing to different array elements.  More importantly, what hardware are you expecting this to synthesize to?  Simulation is one thing, but I don't know that this sort of construct is clear enough to produce any particular hardware.  Also, are you targeting an FPGA or a custom chip?  I expect there to potentially be a difference.


> 2.
> p2: process(CLK) is
> begin
>   if CLK'event and CLK = '1' then
>     case C1 & C2 is
>       when "10" =>
>         An_Array(a) <= D1;
>
>       when "01" =>
>         An_Array(b) <= D2;
>
>       when "11" =>
>         -- I know a /= b
>         -- do I need to inform VHDL compiler of 2 different addresses?
>         An_Array(a) <= D1;
>         An_Array(b) <= D2;
>
>       when others =>
>         null;
>     end case;
>   end if;
> end process;
>
> I think it is no problem with a simulator.

Again, the same issue.  I believe this code is equivalent to the first example.  The assignment statements tend to generate drivers and you may get two drivers on the address input to the array.  I expect you are intending to generate a dual port RAM with two separate inputs for address and data with separate write capability.  In FPGAs I use the sample code provided by the vendor to infer RAMs.  Even if I use my own code, I write the RAM code to be an independent module and then instantiate that.

So what is your target?
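
For reference, a sketch of a two-write-port RAM in the style of the vendor inference templates (entity and signal names here are illustrative, not from the original post); whether a given tool actually maps this to block RAM has to be checked against its documentation:

-- Sketch only, in the style of true dual-port inference templates.
-- Read ports are omitted for brevity.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity ram_2w is
  generic (AW : natural := 8; DW : natural := 16);
  port (
    clk        : in  std_logic;
    we_a, we_b : in  std_logic;
    addr_a     : in  unsigned(AW - 1 downto 0);
    addr_b     : in  unsigned(AW - 1 downto 0);
    din_a      : in  std_logic_vector(DW - 1 downto 0);
    din_b      : in  std_logic_vector(DW - 1 downto 0)
  );
end entity;

architecture rtl of ram_2w is
  type ram_t is array (0 to 2**AW - 1) of std_logic_vector(DW - 1 downto 0);
  shared variable ram : ram_t;
begin
  -- Port A write
  port_a : process (clk) is
  begin
    if rising_edge(clk) then
      if we_a = '1' then
        ram(to_integer(addr_a)) := din_a;
      end if;
    end if;
  end process;

  -- Port B write.  Behaviour when addr_a = addr_b and both writes fire in
  -- the same cycle is not defined here; the surrounding design must
  -- guarantee the addresses differ, as in the original post.
  port_b : process (clk) is
  begin
    if rising_edge(clk) then
      if we_b = '1' then
        ram(to_integer(addr_b)) := din_b;
      end if;
    end if;
  end process;
end architecture;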

-- 

  Rick C.

  - Get 2,000 miles of free Supercharging
  - Tesla referral code - https://ts.la/richard11209

Article: 161443
Subject: Re: How to write a correct code to do 2 writes to an array on same cycle?
From: Weng Tianxiang <wtxwtx@gmail.com>
Date: Wed, 25 Sep 2019 10:31:30 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Wednesday, September 25, 2019 at 9:00:37 AM UTC-7, Rick C wrote:
> On Tuesday, September 24, 2019 at 2:49:14 PM UTC-4, Weng Tianxiang wrote:
> > Hi,
> >
> > Here is a code segment showing 2 methods doing 2 writes to an array with 2 different addresses on the same cycle:
> >
> > 1.
> > p1: process(CLK) is
> > begin
> >   if CLK'event and CLK = '1' then
> >     if C1 then
> >       An_Array(a) <= D1;
> >     end if;
> >
> >     -- I know a /= b
> >     -- do I need to inform VHDL compiler of 2 different addresses?
> >     if C2 then
> >       An_Array(b) <= D2;
> >     end if;
> >   end if;
> > end process;
>
> I think the short answer is YES, the compiler has no way of knowing you are writing to different array elements.  More importantly, what hardware are you expecting this to synthesize to?  Simulation is one thing, but I don't know that this sort of construct is clear enough to produce any particular hardware.  Also, are you targeting an FPGA or a custom chip?  I expect there to potentially be a difference.
>
>
> > 2.
> > p2: process(CLK) is
> > begin
> >   if CLK'event and CLK = '1' then
> >     case C1 & C2 is
> >       when "10" =>
> >         An_Array(a) <= D1;
> >
> >       when "01" =>
> >         An_Array(b) <= D2;
> >
> >       when "11" =>
> >         -- I know a /= b
> >         -- do I need to inform VHDL compiler of 2 different addresses?
> >         An_Array(a) <= D1;
> >         An_Array(b) <= D2;
> >
> >       when others =>
> >         null;
> >     end case;
> >   end if;
> > end process;
> >
> > I think it is no problem with a simulator.
>
> Again, the same issue.  I believe this code is equivalent to the first example.  The assignment statements tend to generate drivers and you may get two drivers on the address input to the array.  I expect you are intending to generate a dual port RAM with two separate inputs for address and data with separate write capability.  In FPGAs I use the sample code provided by the vendor to infer RAMs.  Even if I use my own code, I write the RAM code to be an independent module and then instantiate that.
>
> So what is your target?
>
> -- 
>
>   Rick C.
>
>   - Get 2,000 miles of free Supercharging
>   - Tesla referral code - https://ts.la/richard11209

Hi Rick,

Thank you for your answer.

Actually I am writing code not targeting any FPGA manufacturer, neither Altera nor Xilinx.  I just want the code to be simulated by the Mentor simulator to prove that my algorithm on a universal subject works IN THEORY: with 2 write ports for an array, everything works.  By doing so it will save me a lot of energy, cost and time.

Translating 2 writes at 2 different addresses for an array in code to a 2-write-port memory is considered a mature technology, like implementing a 64-bit adder, if one decides to implement it on an FPGA chip.

I think, based on the definition of a sequential process with or without a clock, my code works: if conditions C1 = '1' and C2 = '1', the simulator by definition first executes An_Array(a) <= D1; and then An_Array(b) <= D2; because a /= b, An_Array gets its 2 writes without any problem in simulation.

I know that if I want the code to run on an Altera chip or a Xilinx chip, I have to use one of multiple possible 2-write memory mechanisms to implement it.  But at this stage, I don't want to spend time doing that.

At this stage, I think I don't have to give it consideration.

Weng

Article: 161444
Subject: Re: How to write a correct code to do 2 writes to an array on same cycle?
From: KJ <kkjennings@sbcglobal.net>
Date: Wed, 25 Sep 2019 10:47:00 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Wednesday, September 25, 2019 at 1:31:35 PM UTC-4, Weng Tianxiang wrote:
> On Wednesday, September 25, 2019 at 9:00:37 AM UTC-7, Rick C wrote:
> > On Tuesday, September 24, 2019 at 2:49:14 PM UTC-4, Weng Tianxiang
>
> I just want the code to be simulated by the Mentor simulator to prove that my algorithm on a universal subject works IN THEORY: with 2 write ports for an array, everything works.  By doing so it will save me a lot of energy, cost and time.

So, you want someone else to run a VHDL simulator for you in order to save you a lot of energy, cost and time?  Ummm... why, are you being lazy?  I can pretty much guarantee that any VHDL simulator will properly execute the code that you've written per the VHDL language standard.  Whether that code performs the function that you would like for your "algorithm on a universal subject" is up to you to decide.  Best of luck on your new endeavors.

Kevin Jennings

Article: 161445
Subject: Re: How to write a correct code to do 2 writes to an array on same cycle?
From: Weng Tianxiang <wtxwtx@gmail.com>
Date: Wed, 25 Sep 2019 11:07:42 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Wednesday, September 25, 2019 at 10:47:04 AM UTC-7, KJ wrote:
> On Wednesday, September 25, 2019 at 1:31:35 PM UTC-4, Weng Tianxiang wrote:
> > On Wednesday, September 25, 2019 at 9:00:37 AM UTC-7, Rick C wrote:
> > > On Tuesday, September 24, 2019 at 2:49:14 PM UTC-4, Weng Tianxiang
> >
> > I just want the code to be simulated by the Mentor simulator to prove that my algorithm on a universal subject works IN THEORY: with 2 write ports for an array, everything works.  By doing so it will save me a lot of energy, cost and time.
>
> So, you want someone else to run a VHDL simulator for you in order to save you a lot of energy, cost and time?  Ummm... why, are you being lazy?  I can pretty much guarantee that any VHDL simulator will properly execute the code that you've written per the VHDL language standard.  Whether that code performs the function that you would like for your "algorithm on a universal subject" is up to you to decide.  Best of luck on your new endeavors.
>
> Kevin Jennings

Hi KJ,

I never said: "So, you want someone else to run a VHDL simulator for you in order to save you a lot of energy, cost and time?"

I will do the simulation myself to determine whether the algorithm is correct.

And I will let others test whether the code implementation on either a Xilinx chip or an Altera chip works.

A 2-write-port memory is a mature technique and I don't have to spend a lot of time doing it myself.

Thank you.

Weng

Article: 161446
Subject: Re: How to write a correct code to do 2 writes to an array on same cycle?
From: KJ <kkjennings@sbcglobal.net>
Date: Wed, 25 Sep 2019 12:34:09 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Wednesday, September 25, 2019 at 2:07:45 PM UTC-4, Weng Tianxiang wrote:
> 
> I never said that: "So, you want someone else to run a VHDL simulator for you in order to save you a lot of energy, cost and time?"
> 
I paraphrased slightly, but yes you did say that.  Read your original post which I quoted you on.

> I will do the simulation by myself to determine if the algorithm is correct.

Cool.
> 
> And I will let others to test if the code implementation on either a Xilinx chip or an Altera chip is working. 
> 
Beyond a superficial, quick code review, why would anyone want to do that?

> A 2 write port memory is a mature technique and I don't have to spend a lot of time to do it myself.
> 
And yet here you are, writing up your own code for two write port memory which you say "is a mature technique".  That's pretty much the definition of re-inventing the wheel.

Kevin Jennings

Article: 161447
Subject: Re: How to write a correct code to do 2 writes to an array on same cycle?
From: Weng Tianxiang <wtxwtx@gmail.com>
Date: Wed, 25 Sep 2019 14:22:31 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Wednesday, September 25, 2019 at 12:34:13 PM UTC-7, KJ wrote:
> On Wednesday, September 25, 2019 at 2:07:45 PM UTC-4, Weng Tianxiang wrote:
> > 
> > I never said that: "So, you want someone else to run a VHDL simulator for you in order to save you a lot of energy, cost and time?"
> > 
> I paraphrased slightly, but yes you did say that.  Read your original post which I quoted you on.
> 
> > I will do the simulation by myself to determine if the algorithm is correct.
> 
> Cool.
> > 
> > And I will let others to test if the code implementation on either a Xilinx chip or an Altera chip is working. 
> > 
> Beyond a superficial, quick code review, why would anyone want to do that?
> 
> > A 2 write port memory is a mature technique and I don't have to spend a lot of time to do it myself.
> > 
> And yet here you are, writing up your own code for two write port memory which you say "is a mature technique".  That's pretty much the definition of re-inventing the wheel.
> 
> Kevin Jennings

KJ,

I would prefer that the VHDL grammar introduce a new statement specifying that an if statement is a second write to an array.

Here is a code segment showing the method, doing 2 writes to an array with 2 different addresses on the same cycle:

p1: process(CLK) is
begin
  if CLK'event and CLK = '1' then
    if C1 then
      An_Array(a) <= D1;
    end if;

    -- "IF_2" is a new keyword which introduces a second write to an array in its full range, including all "else", "elsif" parts. And "if_2" keyword can only be used in a clocked process.
 
    if_2 C2 then
      An_Array(b) <= D2;  
    end if;
  end if;
end process;

With the newly suggested keyword "if_2" in VHDL, everybody would like it, not having to repeatedly write a 2-write-port memory for different FPGA chips.

In my design there are more than 10 arrays that need a 2-write-port memory.

Weng

Article: 161448
Subject: New keyword "if_2" for HDL is suggested for dealing with 2-write port memory
From: Weng Tianxiang <wtxwtx@gmail.com>
Date: Wed, 25 Sep 2019 14:53:35 -0700 (PDT)
Links: << >>  << T >>  << A >>
Hi,
In my opinion, using a 2-write-port memory is a mature technique and its implementation in any chip is never a secret.  Hardware designers in HDL often use 2-write-port memories in their applications.

To relieve hardware designers from repeatedly writing complex code for a 2-write-port memory, I suggest that the full HDL grammar spectrum introduce a new keyword "if_2" and a new "if_2" statement, specifying a new if statement which has everything the same as an if statement, but specifies a second write to an array.  Here is a code example of how to introduce such a statement:

p1: process(CLK) is
begin
  if CLK'event and CLK = '1' then
    if C1 then
      An_Array(a) <= D1; -- it is the first write to array An_Array
    end if;

-- "IF_2" is a new keyword which introduces a second write to an array in its full range, including all "else" and "elsif" parts. The "if_2" keyword can only be used in a clocked process.

    if_2 C2 then
      An_Array(b) <= D2; -- it is a second write to array An_Array
    end if;
  end if;
end process;

If a 2nd write to an array does not need any condition, the statement can be written as:

    if_2 '1' then
      An_Array(b) <= D2; -- it is a second write without any condition
    end if;

With the newly suggested keyword "if_2" in HDL, everybody would like it, not having to repeatedly write a 2-write-port memory for different FPGA chips.

Weng

Article: 161449
Subject: Re: New keyword "if_2" for HDL is suggested for dealing with 2-write
From: Rick C <gnuarm.deletethisbit@gmail.com>
Date: Wed, 25 Sep 2019 15:22:49 -0700 (PDT)
Links: << >>  << T >>  << A >>
On Wednesday, September 25, 2019 at 5:53:38 PM UTC-4, Weng Tianxiang wrote:
> Hi,
> In my opinion, using a 2-write-port memory is a mature technique and its implementation in any chip is never a secret.  Hardware designers in HDL often use 2-write-port memories in their applications.
>
> To relieve hardware designers from repeatedly writing complex code for a 2-write-port memory, I suggest that the full HDL grammar spectrum introduce a new keyword "if_2" and a new "if_2" statement, specifying a new if statement which has everything the same as an if statement, but specifies a second write to an array.  Here is a code example of how to introduce such a statement:
>
> p1: process(CLK) is
> begin
>   if CLK'event and CLK = '1' then
>     if C1 then
>       An_Array(a) <= D1; -- it is the first write to array An_Array
>     end if;
>
> -- "IF_2" is a new keyword which introduces a second write to an array in its full range, including all "else" and "elsif" parts. The "if_2" keyword can only be used in a clocked process.
>
>     if_2 C2 then
>       An_Array(b) <= D2; -- it is a second write to array An_Array
>     end if;
>   end if;
> end process;
>
> If a 2nd write to an array does not need any condition, the statement can be written as:
>
>     if_2 '1' then
>       An_Array(b) <= D2; -- it is a second write without any condition
>     end if;
>
> With the newly suggested keyword "if_2" in HDL, everybody would like it, not having to repeatedly write a 2-write-port memory for different FPGA chips.
>
> Weng

Can you give an example of the code this would replace???  I don't remember two-port memory code being all that complex.

-- 

  Rick C.

  - Get 2,000 miles of free Supercharging
  - Tesla referral code - https://ts.la/richard11209


