Messages from 161575

Article: 161575
Subject: Re: New coding method for a state machine in groups in HDL
From: Richard Damon <Richard@Damon-Family.org>
Date: Sat, 30 Nov 2019 20:52:44 -0500
On 11/30/19 1:40 PM, Rick C wrote:
> On Saturday, November 30, 2019 at 11:33:45 AM UTC-5, Richard Damon wrote:
>> On 11/30/19 9:55 AM, Rick C wrote:
>>> On Saturday, November 30, 2019 at 9:16:40 AM UTC-5, Richard Damon wrote:
>>>>
>>>> There are FPGAs which provide a 'gated clock', but they do the gating at
>>>> the row or region driver level, as that is where you get better power
>>>> savings (driving the clock tree is a significant use of power, while
>>>> gating at the flip-flop level saves virtually nothing, if it doesn't
>>>> cost you extra logic; if it needs a LUT to gate, you have lost).
>>>>
>>>> These gated clock drivers tend to have de-glitching logic on them that
>>>> makes them safe to use (a low gate signal keeps the output clock from
>>>> going high but doesn't force a high output low). This means you can
>>>> save power if you know a whole section won't be changing for a while,
>>>> but it is unlikely to help on a small state machine that only
>>>> occasionally changes, as the power in the change-prediction logic may
>>>> cost more than the savings. Also, since these clock drivers are a
>>>> limited critical resource, you likely don't have enough of them to use
>>>> at a fine grain.
>>>
>>> Who makes these FPGAs?  Are these the typical clock generators that most FPGAs have only a handful of?  That's still better than no clock gating. 
>>>
>>
>> Microsemi, now part of Microchip. The gating is part of the regional
>> clock driving tree, so is somewhat limited, but not as limited as the
>> master clock generators (which also have gating).
> 
> Which products?  I was just looking at their site the other day and I didn't see anything remotely new.  Maybe I missed this?  Or is this not new? 
> 

I use it in the SmartFusion2 and Igloo2 products. I don't think they
hype the feature, so you need to dig into the Macro Function Library
documentation and the Clocking Resources Documentation to see it. All
the global/row global clock buffers have an enable option.
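For illustration only, here is a sketch of what using such a buffer can look like in VHDL. `GATED_CLK_BUF` and its port names are invented placeholders, not actual Microsemi macros; the real primitive names and port lists are in the Macro Library Guide mentioned above:

```vhdl
-- Illustrative sketch only: GATED_CLK_BUF is a placeholder name for a
-- vendor-supplied regional clock buffer with an enable; check the vendor's
-- macro library documentation for the real primitive and its ports.
library ieee;
use ieee.std_logic_1164.all;

entity power_region is
  port (
    clk_in    : in  std_logic;  -- master clock
    region_en : in  std_logic;  -- high while this region must keep running
    d         : in  std_logic;
    q         : out std_logic
  );
end entity;

architecture rtl of power_region is
  component GATED_CLK_BUF is     -- hypothetical vendor macro
    port (a : in std_logic; en : in std_logic; y : out std_logic);
  end component;
  signal clk_region : std_logic;
begin
  -- The buffer's internal de-glitch logic ensures a low 'en' only holds
  -- the output low; it never chops a high pulse short.
  u_buf : GATED_CLK_BUF port map (a => clk_in, en => region_en, y => clk_region);

  process (clk_region)
  begin
    if rising_edge(clk_region) then
      q <= d;                    -- all flops in the region share clk_region
    end if;
  end process;
end architecture;
```

The point of the pattern is that the enable gates an entire regional clock tree, so every flip-flop fed by clk_region stops toggling at once, with no per-flop gating logic.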


Article: 161576
Subject: Re: New coding method for a state machine in groups in HDL
From: Weng Tianxiang <wtxwtx@gmail.com>
Date: Mon, 2 Dec 2019 20:02:55 -0800 (PST)
> And where are your experimental results that demonstrate something useful compared to current state of the art?  That's all I've been asking...for years
> 
> Kevin

KJ,

Why I have not wanted to directly answer your questions for years can be seen in the question above, which is another example: most of your questions are pointless and nonsensical, like this one.

AS A PATENT, DO I NEED TO DO EXPERIMENTS TO DEMONSTRATE THAT MY CIRCUITS WOULD REDUCE POWER CONSUMPTION? A REALLY STUPID QUESTION!!!

As my first post shows, no one in the world except me has suggested or given any hint that the states in a state machine can be divided into groups AT ONE'S DISCRETION to reduce power consumption.

Here is a coding snippet for the new method, from which you can immediately see how a state machine would be coded:

type State_Machine_t is (
  First_group : (s1, s2, s3),
  Second_Group : (s4, s5, s6),
  Third_Group : (s7, s8, s9)
);
signal State_Machine, State_Machine_Next : State_Machine_t ;

You can carefully compare my patent with the following paper and find the essential differences.

Here is a famous paper on a similar method, with 244 citations, which uses probability theory to divide states into groups:
http://www.scarpaz.com/2100-papers/Low%20Power/00503933.pdf

The 244 citations themselves demonstrate that this was once a hot and very important topic in academic circles.

I have fully perfected the method with the simplest approach and the simplest logic!

I not only invented the method, trivial though it may be, but also invented systematic methods for implementing state division with a set of diagrams. These systematic methods can be implemented immediately by any synthesizer manufacturer.

I divide all the state-jumping signals of a state group into four categories:

1. Local jumping signal: its current state and next state are both within the state group, and its current state is different from its target state;

2. Holding jumping signal: its current state and next state are both within the state group, and its current state is the same as its target state;

3. Entering jumping signal: its current state is not in the state group, but its next state is within the state group;

4. Leaving jumping signal: its current state is in the group, but its next state is not within the group.

The concept is so simple that any experienced designer, except you, KJ, who understood it would have ideas on how to generate all the related state machine circuits.
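To make the four categories concrete, here is a neutral VHDL sketch (all names invented; this is not the patented circuit): each group gets an enable that is asserted for local, entering, and leaving transitions, and stays low for holding transitions:

```vhdl
-- Illustrative only: a 9-state machine in three groups.  g_en(i) is high
-- exactly when group i sees a local, entering, or leaving transition;
-- a holding transition (state unchanged) asserts no enable at all.
library ieee;
use ieee.std_logic_1164.all;

entity grouped_fsm is
  port (clk, go, stop : in  std_logic;
        in_group1     : out std_logic);
end entity;

architecture rtl of grouped_fsm is
  type state_t is (s1, s2, s3, s4, s5, s6, s7, s8, s9);
  signal state, state_next : state_t := s1;
  signal g_en : std_logic_vector(1 to 3);

  -- group membership: s1-s3 -> 1, s4-s6 -> 2, s7-s9 -> 3
  function grp(s : state_t) return positive is
  begin
    case s is
      when s1 | s2 | s3 => return 1;
      when s4 | s5 | s6 => return 2;
      when others       => return 3;
    end case;
  end function;
begin
  -- placeholder next-state logic
  state_next <= s1 when stop = '1' else
                s4 when state = s3 and go = '1' else
                state;

  -- one enable per group, derived from the four categories:
  -- local/leaving assert the current state's group, entering asserts the
  -- next state's group, holding (state = state_next) asserts nothing
  gen_en : for i in 1 to 3 generate
    g_en(i) <= '1' when state /= state_next and
                        (grp(state) = i or grp(state_next) = i)
               else '0';
  end generate;

  -- with a single state register the enables simply OR together; with a
  -- grouped encoding, each group's flops would use their own g_en(i),
  -- or g_en(i) could drive a gated regional clock buffer
  process (clk)
  begin
    if rising_edge(clk) then
      if (g_en(1) or g_en(2) or g_en(3)) = '1' then
        state <= state_next;
      end if;
    end if;
  end process;

  in_group1 <= '1' when grp(state) = 1 else '0';
end architecture;
```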

Weng


Article: 161577
Subject: Issue regarding boot qspi flash in zynq
From: richa2854@gmail.com
Date: Tue, 3 Dec 2019 01:53:10 -0800 (PST)
Hey folks,
I need help regarding QSPI flash boot with Zynq.

1. I am using the ZYNQ part number xc7z014sclg484-1 (active).
2. Software used: Vivado 2019.1
3. QSPI flash: W25Q128FWSIG
4. I am doing the QSPI flash boot without DDR, following the procedure given in https://www.xilinx.com/support/answers/56044.html.
5. My FPGA VHDL code blinks an LED.
6. In SDK I am using the helloworld program as a simple application.
7. The bitstream works successfully, but the helloworld.elf file is not loading into flash.

So could anybody tell me the procedure to make the FSBL?

Article: 161578
Subject: Re: New coding method for a state machine in groups in HDL
From: KJ <kkjennings@sbcglobal.net>
Date: Tue, 3 Dec 2019 07:33:04 -0800 (PST)
On Monday, December 2, 2019 at 11:02:59 PM UTC-5, Weng Tianxiang wrote:
> > And where are your experimental results that demonstrate something useful compared to current state of the art?  That's all I've been asking...for years
> >
>
> Why I have not wanted to directly answer your questions for years can be seen in the question above, which is another example: most of your questions are pointless and nonsensical, like this one.
>
Two things can be concluded from this:
- When you make a claim but you have no evidence to support that claim, you consider the request to provide evidence to support your claim to be pointless and nonsense.
- You don't have any evidence to support your claims.

This has been demonstrated to be true here on this thread and in just about every other posting you've done in newsgroups in the past.

> AS A PATENT, DO I NEED TO DO EXPERIMENTS TO DEMONSTRATE THAT MY CIRCUITS WOULD REDUCE POWER CONSUMPTION? A REALLY STUPID QUESTION!!!

So if I were interested in purchasing the rights to your patent and I asked for the test results that back your claim of reduced power consumption before I cut you a check, this is how you would respond.  That speaks volumes.

You come to this newsgroup and comp.lang.vhdl typically spouting garbage.  In a number of cases in the past I've shown your claims to be garbage by providing evidence that completely contradicts your claim.  You ignore that evidence but can't refute it.  When I ask for the test data that supports your claim, you go off on a rant like you've done here again, because you have nothing.

Why don't you find some 'Snake Oil Salesman' newsgroup to post to instead?  Maybe there is a sub-group for rude salesmen that would suit you.  Looking for a shill here isn't getting you anywhere.

You have nothing technical to back up anything you say here, and that is likely true for any of your patents as well.

Done with this thread with you.  Good luck hawking your patents.  You should probably hope that nobody searches your name and runs across your newsgroup postings.

Kevin Jennings

Article: 161579
Subject: Re: Efinix and their new Trion FPGAs -
From: Rick C <gnuarm.deletethisbit@gmail.com>
Date: Tue, 3 Dec 2019 12:25:41 -0800 (PST)
On Friday, November 29, 2019 at 2:51:51 AM UTC-5, Brane2 wrote:
> I'm talking about these guys:
>
> https://www.efinixinc.com
>
> Their Trion program seems interesting:
>
> - it stretches from the area occupied by Lattice's MachXO3 on the low end to ECP5 on the high end
> - no onboard FLASH; just OTP on a few small models and nothing on the high end
> - universal tile that can do routing as well as LUT/MEM/logic
> - 5 bit BLOCK RAM instead of traditional 9-bit
> - additional simplifications on the process end claim a 4x chip price reduction (only 7 metallisation layers instead of 14, etc.)
>
> Trion on first impression looks nice, but:
>
> - a bit slower than ECP5
> - based on Digikey prices, not cheaper than ECP5, or even pricier at some points
>
> has anyone used them and has some data to share on the matter?

I dug into the parts a bit more and if I bite the bullet with the package, the T8 looks pretty good.  But it has some limitations.

There is 12kB of block RAM available in widths that include x5 and multiples.  A bit odd, but potentially useful.  So the RAM could be 12 kW of 10 bits.  As instruction memory that can be useful.  I just wish it had a bit more memory.

In the 81 pin BGA package it has only 1 of the five PLLs available and that is "simple" whatever that means.  That package only has 55 GPIOs.  Many of the I/O features are not available in the smaller packages.  The 144QFP is a monster and isn't of much use to me.

Packaging is a PITA.

--

  Rick C.

  -- Get 1,000 miles of free Supercharging
  -- Tesla referral code - https://ts.la/richard11209

Article: 161580
Subject: Re: Efinix and their new Trion FPGAs -
From: Rick C <gnuarm.deletethisbit@gmail.com>
Date: Wed, 4 Dec 2019 19:16:38 -0800 (PST)
On Tuesday, December 3, 2019 at 3:25:45 PM UTC-5, Rick C wrote:
> On Friday, November 29, 2019 at 2:51:51 AM UTC-5, Brane2 wrote:
> > I'm talking about these guys:
> >
> > https://www.efinixinc.com
> >
> > Their Trion program seems interesting:
> >
> > - it stretches from the area occupied by Lattice's MachXO3 on the low end to ECP5 on the high end
> > - no onboard FLASH; just OTP on a few small models and nothing on the high end
> > - universal tile that can do routing as well as LUT/MEM/logic
> > - 5 bit BLOCK RAM instead of traditional 9-bit
> > - additional simplifications on the process end claim a 4x chip price reduction (only 7 metallisation layers instead of 14, etc.)
> >
> > Trion on first impression looks nice, but:
> >
> > - a bit slower than ECP5
> > - based on Digikey prices, not cheaper than ECP5, or even pricier at some points
> >
> > has anyone used them and has some data to share on the matter?
>
> I dug into the parts a bit more and if I bite the bullet with the package, the T8 looks pretty good.  But it has some limitations.
>
> There is 12kB of block RAM available in widths that include x5 and multiples.  A bit odd, but potentially useful.  So the RAM could be 12 kW of 10 bits.  As instruction memory that can be useful.  I just wish it had a bit more memory.
>
> In the 81 pin BGA package it has only 1 of the five PLLs available and that is "simple" whatever that means.  That package only has 55 GPIOs.  Many of the I/O features are not available in the smaller packages.  The 144QFP is a monster and isn't of much use to me.
>
> Packaging is a PITA.

I tried accessing the web site to get pricing at their store.  After days they have not approved my login so I can visit their store.  A sales person sent me an email but has not responded to my reply.  Strange.  I was hoping to get pricing info on some larger parts.

--

  Rick C.

  -+ Get 1,000 miles of free Supercharging
  -+ Tesla referral code - https://ts.la/richard11209

Article: 161581
Subject: Anybody used Amazon AWS for HW sims?
From: Kevin Neilson <kevin.neilson@xilinx.com>
Date: Wed, 4 Dec 2019 20:13:53 -0800 (PST)
Has anybody used Amazon AWS for FPGA hardware sims?  I don't know anything about it and I tried to read up on it but it's all a bit vague.  Amazon has farms of FPGAs (Xilinx, I think?) but they mostly market these as software accelerators, meant to be somewhat abstracted from most users.  But I'm wondering if I can upload bitfiles and do hardware sims.  For example, to characterize some of my error correction modules, I need to put in on the order of 1e16 bits, which would take maybe months in the simulator, or maybe a day in full-speed hardware (using a synthesizable testbench).  Is that something I could do with these AWS farms?  In this case, it wouldn't even have to use the same FPGA parts I'm targeting, nor would it need to run at full speed.

Furthermore, can AWS be used for synthesis/PAR?  Regression Verilog sims?
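For reference, the kind of synthesizable testbench described above is often just an on-chip PRBS source plus an error counter wrapped around the DUT. A rough VHDL sketch, with all names invented (`dut_chain` is a hypothetical component standing in for the encoder/channel/decoder path under test):

```vhdl
-- Rough sketch of a synthesizable self-checking harness: an LFSR feeds the
-- device-under-test and a counter accumulates mismatches at full clock rate.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity hw_harness is
  port (clk, rst  : in  std_logic;
        err_count : out unsigned(47 downto 0));
end entity;

architecture rtl of hw_harness is
  signal lfsr     : std_logic_vector(30 downto 0) := (others => '1');
  signal stimulus : std_logic;
  signal decoded  : std_logic;
  signal errors   : unsigned(47 downto 0) := (others => '0');
begin
  -- 31-bit maximal-length LFSR (x^31 + x^28 + 1) as a PRBS source
  process (clk)
  begin
    if rising_edge(clk) then
      lfsr <= lfsr(29 downto 0) & (lfsr(30) xor lfsr(27));
    end if;
  end process;
  stimulus <= lfsr(30);

  -- u_dut : dut_chain port map (clk => clk, d_in => stimulus, d_out => decoded);
  decoded <= stimulus;  -- placeholder so the sketch is self-contained

  process (clk)
  begin
    if rising_edge(clk) then
      if rst = '1' then
        errors <= (others => '0');
      elsif decoded /= stimulus then  -- pipeline delay-matching omitted
        errors <= errors + 1;
      end if;
    end if;
  end process;
  err_count <= errors;
end architecture;
```

Sanity-checking the numbers in the post: one bit per cycle at ~300 MHz is ~3e8 bits/s, so 1e16 bits would take on the order of a year per lane; it's the wide datapaths or many parallel instances a large FPGA can hold (e.g. 512 bits/cycle at 250 MHz is ~1.3e11 bits/s, roughly a day) that make the "maybe a day" figure plausible.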

Article: 161582
Subject: Re: Efinix and their new Trion FPGAs -
From: Michael Kellett <mk@mkesc.co.uk>
Date: Thu, 5 Dec 2019 08:47:22 +0000
On 05/12/2019 03:16, Rick C wrote:
> On Tuesday, December 3, 2019 at 3:25:45 PM UTC-5, Rick C wrote:
>> On Friday, November 29, 2019 at 2:51:51 AM UTC-5, Brane2 wrote:
>>> I'm talking about these guys:
>>>
>>> https://www.efinixinc.com
>>>
>>> Their Trion program seems interesting:
>>>
>>> - it stretches from the area occupied by Lattice's MachXO3 on the low end to ECP5 on the high end
>>> - no onboard FLASH; just OTP on a few small models and nothing on the high end
>>> - universal tile that can do routing as well as LUT/MEM/logic
>>> - 5 bit BLOCK RAM instead of traditional 9-bit
>>> - additional simplifications on the process end claim a 4x chip price reduction (only 7 metallisation layers instead of 14, etc.)
>>>
>>> Trion on first impression looks nice, but:
>>>
>>> - a bit slower than ECP5
>>> - based on Digikey prices, not cheaper than ECP5, or even pricier at some points
>>>
>>> has anyone used them and has some data to share on the matter ?
>>
>> I dug into the parts a bit more and if I bite the bullet with the package, the T8 looks pretty good.  But it has some limitations.
>>
>> There is 12kB of block RAM available in widths that include x5 and multiples.  A bit odd, but potentially useful.  So the RAM could be 12 kW of 10 bits.  As instruction memory that can be useful.  I just wish it had a bit more memory.
>>
>> In the 81 pin BGA package it has only 1 of the five PLLs available and that is "simple" whatever that means.  That package only has 55 GPIOs.  Many of the I/O features are not available in the smaller packages.  The 144QFP is a monster and isn't of much use to me.
>>
>> Packaging is a PITA.
> 
> I tried accessing the web site to get pricing at their store.  After days they have not approved my login so I can visit their store.  A sales person sent me an email but has not responded to my reply.  Strange.  I was hoping to get pricing info on some larger parts.
> 
They only approved my login after a fairly stroppy email. I think that 
currently Efinix fail all tests for being ready to do business.

I can't understand the package thing - I thought that Microchip had 
proved that getting design-ins was key to getting business and making it 
easy to get started was the key to getting design-ins.

Sub 1mm pitch BGA packages add huge amounts of cost right at the very 
front end of a project - often the bit when you're using the tea and 
coffee budget to try and get the bosses interested. But 0.5mm pitch TQFP 
and QFN can be hand soldered. I'm amazed that the aspiring newcomers to 
the FPGA market aren't more actively filling that gap.

MK

Article: 161583
Subject: Re: Anybody used Amazon AWS for HW sims?
From: Theo <theom+news@chiark.greenend.org.uk>
Date: 05 Dec 2019 10:29:55 +0000 (GMT)
Kevin Neilson <kevin.neilson@xilinx.com> wrote:
> Has anybody used Amazon AWS for FPGA hardware sims?  I don't know anything
> about it and I tried to read up on it but it's all a bit vague.  Amazon
> has farms of FPGAs (Xilinx, I think?) but they mostly market these as
> software accelerators, meant to be somewhat abstracted from most users. 
> But I'm wondering if I can upload bitfiles and do hardware sims.  For
> example, to characterize some of my error correction modules, I need to
> put in on the order of 1e16 bits, which would take maybe months in the
> simulator, or maybe a day in full-speed hardware (using a synthesizable
> testbench).  Is that something I could do with these AWS farms?  In this
> case, it wouldn't even have to use the same FPGA parts I'm targeting, nor
> would it need to run at full speed.

I've not used them, but AIUI they provide scaffolding for building
a component that sits inside an F1 FPGA.  In other words you don't get full
access to all the pins etc, but your HDL sits inside that scaffolding and
uses things like virtual LEDs and PCIe transactions that are communicated to
your app on the CPU side via the AWS FPGA IP.

But inside the FPGA logic you can mostly do whatever you want - subject to
not violating constraints like taking too much power.

> Furthermore, can AWS be used for synthesis/PAR?  Regression Verilog sims? 

For driving F1 instances, the tools are free.  For other things, as long as
you can sort the licensing setup I don't see why not.

https://github.com/aws/aws-fpga
has lots of info.

Theo

Article: 161584
Subject: Re: Anybody used Amazon AWS for HW sims?
From: Kevin Neilson <kevin.neilson@xilinx.com>
Date: Thu, 5 Dec 2019 08:54:46 -0800 (PST)
On Thursday, December 5, 2019 at 3:30:01 AM UTC-7, Theo wrote:
> Kevin Neilson <kevin.neilson@xilinx.com> wrote:
> > Has anybody used Amazon AWS for FPGA hardware sims?  I don't know anything
> > about it and I tried to read up on it but it's all a bit vague.  Amazon
> > has farms of FPGAs (Xilinx, I think?) but they mostly market these as
> > software accelerators, meant to be somewhat abstracted from most users. 
> > But I'm wondering if I can upload bitfiles and do hardware sims.  For
> > example, to characterize some of my error correction modules, I need to
> > put in on the order of 1e16 bits, which would take maybe months in the
> > simulator, or maybe a day in full-speed hardware (using a synthesizable
> > testbench).  Is that something I could do with these AWS farms?  In this
> > case, it wouldn't even have to use the same FPGA parts I'm targeting, nor
> > would it need to run at full speed.
> 
> I've not used them, but AIUI they provide scaffolding for building
> a component that sits inside an F1 FPGA.  In other words you don't get full
> access to all the pins etc, but your HDL sits inside that scaffolding and
> uses things like virtual LEDs and PCIe transactions that are communicated to
> your app on the CPU side via the AWS FPGA IP.
> 
> But inside the FPGA logic you can mostly do whatever you want - subject to
> not violating constraints like taking too much power.
> 
> > Furthermore, can AWS be used for synthesis/PAR?  Regression Verilog sims? 
> 
> For driving F1 instances, the tools are free.  For other things, as long as
> you can sort the licensing setup I don't see why not.
> 
> https://github.com/aws/aws-fpga
> has lots of info.
> 
> Theo

Thanks; I looked briefly at your link but it all sounds like it has a steep learning curve.  Could you describe what an "F1 instance" is?  The Amazon documentation is peppered with references to F1 without really explaining it.

Article: 161585
Subject: Re: Efinix and their new Trion FPGAs -
From: Rick C <gnuarm.deletethisbit@gmail.com>
Date: Thu, 5 Dec 2019 09:02:26 -0800 (PST)
On Thursday, December 5, 2019 at 3:47:35 AM UTC-5, Michael Kellett wrote:
> On 05/12/2019 03:16, Rick C wrote:
> > On Tuesday, December 3, 2019 at 3:25:45 PM UTC-5, Rick C wrote:
> >> On Friday, November 29, 2019 at 2:51:51 AM UTC-5, Brane2 wrote:
> >>> I'm talking about these guys:
> >>>
> >>> https://www.efinixinc.com
> >>>
> >>> Their Trion program seems interesting:
> >>>
> >>> - it stretches from the area occupied by Lattice's MachXO3 on the low end to ECP5 on the high end
> >>> - no onboard FLASH; just OTP on a few small models and nothing on the high end
> >>> - universal tile that can do routing as well as LUT/MEM/logic
> >>> - 5 bit BLOCK RAM instead of traditional 9-bit
> >>> - additional simplifications on the process end claim a 4x chip price reduction (only 7 metallisation layers instead of 14, etc.)
> >>>
> >>> Trion on first impression looks nice, but:
> >>>
> >>> - a bit slower than ECP5
> >>> - based on Digikey prices, not cheaper than ECP5, or even pricier at some points
> >>>
> >>> has anyone used them and has some data to share on the matter?
> >>
> >> I dug into the parts a bit more and if I bite the bullet with the package, the T8 looks pretty good.  But it has some limitations.
> >>
> >> There is 12kB of block RAM available in widths that include x5 and multiples.  A bit odd, but potentially useful.  So the RAM could be 12 kW of 10 bits.  As instruction memory that can be useful.  I just wish it had a bit more memory.
> >>
> >> In the 81 pin BGA package it has only 1 of the five PLLs available and that is "simple" whatever that means.  That package only has 55 GPIOs.  Many of the I/O features are not available in the smaller packages.  The 144QFP is a monster and isn't of much use to me.
> >>
> >> Packaging is a PITA.
> >
> > I tried accessing the web site to get pricing at their store.  After days they have not approved my login so I can visit their store.  A sales person sent me an email but has not responded to my reply.  Strange.  I was hoping to get pricing info on some larger parts.
> >
> They only approved my login after a fairly stroppy email. I think that currently Efinix fail all tests for being ready to do business.
>
> I can't understand the package thing - I thought that Microchip had proved that getting design-ins was key to getting business and making it easy to get started was the key to getting design-ins.
>
> Sub 1mm pitch BGA packages add huge amounts of cost right at the very front end of a project - often the bit when you're using the tea and coffee budget to try and get the bosses interested. But 0.5mm pitch TQFP and QFN can be hand soldered. I'm amazed that the aspiring newcomers to the FPGA market aren't more actively filling that gap.

I hadn't thought of it that way.  I suppose some number of projects do start with midnight-requisitioned, unsanctioned efforts.  Still, as the Xilinx reps pointed out some time ago, each package supported adds cost, not just up front, but continuing costs.  So any start-up will be cautious about adding new packages.

Gowin and AGM are both supporting these easier-to-work-with packages, but it is still unclear how "real" they are.  As I've posted, I have actually had a phone conference with Gowin representatives, but I get zero response from AGM.  I don't recall what packages Anlogic has, but other than a board at Seeed they have no presence.

I'm glad I'm not alone in preferring the QFN/QFP packages.

--

  Rick C.

  +- Get 1,000 miles of free Supercharging
  +- Tesla referral code - https://ts.la/richard11209

Article: 161586
Subject: Enabler for New FPGA Companies
From: Rick C <gnuarm.deletethisbit@gmail.com>
Date: Thu, 5 Dec 2019 09:22:46 -0800 (PST)
There seems to be a spate of new FPGA companies coming out.  The road block to new FPGA companies has always been two things, patents and software... and maybe software patents.  lol

So what has changed?

For sure FPGAs have been around long enough that many of the basic patents have expired.  The LUT/FF combo that is the basis for all FPGAs has been available for some time now.  Has some other fundamental patent expired more recently to allow these companies to rise from the primordial soup?

Then there is the effort required to develop the software to design these parts.  I suppose with the various third party tools that cost has been reduced compared to starting from scratch, but it still has to cost a lot unless they are mimicking other, existing devices, which I don't think they are doing, are they?

In the last 20 years I don't know that I've seen any new FPGA companies other than mergers.  Now there are at least four newcomers to the business.  Where/how did this happen?
--

  Rick C.

  - Get 1,000 miles of free Supercharging
  - Tesla referral code - https://ts.la/richard11209

Article: 161587
Subject: Re: Anybody used Amazon AWS for HW sims?
From: Theo <theom+news@chiark.greenend.org.uk>
Date: 05 Dec 2019 22:18:39 +0000 (GMT)
Kevin Neilson <kevin.neilson@xilinx.com> wrote:
> Thanks; I looked briefly at your link but it all sounds like it has a
> steep learning curve.  Could you describe what an "F1 instance" is?  The
> Amazon documentation is peppered with references to F1 without really
> explaining it.

AWS EC2 offers you a range of servers, labelled with various letters and
numbers.  They can be either VMs or dedicated servers:
https://aws.amazon.com/ec2/instance-types/

An 'instance' is a particular virtual/dedicated server.  If your workload
requires a thousand copies of your VM running in parallel, each one is an
instance.

'f1' is their particular name for their dedicated servers with FPGAs.  For
example 'f1.2xlarge' is an 8-core server with one FPGA, 'f1.16xlarge' is a
64 core server (or cluster) with 8 FPGAs. At present they're using Virtex
Ultrascale+.

'F1' more generally is the name of their project that provides FPGAs in
their cloud servers.  So people talk about 'running it in Amazon F1' when
they mean the whole action of renting a server from Amazon by the hour to
run their workload on the FPGA in that machine.

Theo

Article: 161588
Subject: Re: Enabler for New FPGA Companies
From: Kevin Neilson <kevin.neilson@xilinx.com>
Date: Thu, 5 Dec 2019 14:39:42 -0800 (PST)
On Thursday, December 5, 2019 at 10:22:50 AM UTC-7, Rick C wrote:
> There seems to be a spate of new FPGA companies coming out.  The road block to new FPGA companies has always been two things, patents and software... and maybe software patents.  lol
>
> So what has changed?
>
> For sure FPGAs have been around long enough that many of the basic patents have expired.  The LUT/FF combo that is the basis for all FPGAs has been available for some time now.  Has some other fundamental patent expired more recently to allow these companies to rise from the primordial soup?
>
> Then there is the effort required to develop the software to design these parts.  I suppose with the various third party tools that cost has been reduced compared to starting from scratch, but it still has to cost a lot unless they are mimicking other, existing devices, which I don't think they are doing, are they?
>
> In the last 20 years I don't know that I've seen any new FPGA companies other than mergers.  Now there are at least four newcomers to the business.  Where/how did this happen?
>
> --
>
>   Rick C.
>
>   - Get 1,000 miles of free Supercharging
>   - Tesla referral code - https://ts.la/richard11209

How are they handling synthesis?  Are they getting Synplify or somebody else to do that?  I would think synthesis software would be one of the biggest roadblocks.

Article: 161589
Subject: Re: Anybody used Amazon AWS for HW sims?
From: Kevin Neilson <kevin.neilson@xilinx.com>
Date: Thu, 5 Dec 2019 17:09:44 -0800 (PST)
On Thursday, December 5, 2019 at 3:18:44 PM UTC-7, Theo wrote:
> Kevin Neilson <kevin.neilson@xilinx.com> wrote:
> > Thanks; I looked briefly at your link but it all sounds like it has a
> > steep learning curve.  Could you describe what an "F1 instance" is?  The
> > Amazon documentation is peppered with references to F1 without really
> > explaining it.
> 
> AWS EC2 offers you a range of servers, labelled with various letters and
> numbers.  They can be either VMs or dedicated servers:
> https://aws.amazon.com/ec2/instance-types/
> 
> An 'instance' is a particular virtual/dedicated server.  If your workload
> requires a thousand copies of your VM running in parallel, each one is an
> instance.
> 
> 'f1' is their particular name for their dedicated servers with FPGAs.  For
> example 'f1.2xlarge' is an 8-core server with one FPGA, 'f1.16xlarge' is a
> 64 core server (or cluster) with 8 FPGAs. At present they're using Virtex
> Ultrascale+.
> 
> 'F1' more generally is the name of their project that provides FPGAs in
> their cloud servers.  So people talk about 'running it in Amazon F1' when
> they mean the whole action of renting a server from Amazon by the hour to
> run their workload on the FPGA in that machine.
> 
> Theo

Thanks for the explanation.  I'm going to have to learn some more.  It sounds like this is not really targeted toward FPGA designers, but it could still be useful.  After you get the framework set up, it should be easier for successive testbenches.

Article: 161590
Subject: Re: Enabler for New FPGA Companies
From: Adrian Byszuk <adrian.byszuk@gmail.com>
Date: Fri, 6 Dec 2019 08:24:02 -0800 (PST)
On Thursday, December 5, 2019 at 11:39:45 PM UTC+1, Kevin Neilson wrote:
> 
> How are they handling synthesis?  Are they getting Synplify or somebody else to do that?  I would think synthesis software would be one of the biggest roadblocks.

That's just a guess, but I wouldn't be surprised if they used the open source Yosys suite.  It has already proven to be quite a reliable tool.

Article: 161591
Subject: Re: Efinix and their new Trion FPGAs -
From: Michael Kellett <mk@mkesc.co.uk>
Date: Wed, 11 Dec 2019 11:31:31 +0000
On 05/12/2019 17:02, Rick C wrote:
> On Thursday, December 5, 2019 at 3:47:35 AM UTC-5, Michael Kellett wrote:
>> On 05/12/2019 03:16, Rick C wrote:
>>> On Tuesday, December 3, 2019 at 3:25:45 PM UTC-5, Rick C wrote:
>>>> On Friday, November 29, 2019 at 2:51:51 AM UTC-5, Brane2 wrote:
>>>>> I'm talking about these guys:
>>>>>
>>>>> https://www.efinixinc.com
>>>>>
>>>>> Their Trion program seems interesting:
>>>>>
>>>>> - it stretches from the area occupied by Lattice's MachXO3 on the low end to the ECP5 on the high end
>>>>> - no onboard FLASH. Just OTP on a few small models and nothing on the high end
>>>>> - universal tile that can do routing as well as LUT/MEM/logic
>>>>> - 5 bit BLOCK RAM instead of traditional 9-bit
>>>>> - additional simplifications on the process end claim a 4x chip price reduction (only 7 metallisation layers instead of 14, etc.)
>>>>>
>>>>> Trion on first impression looks nice, but:
>>>>>
>>>>> - a bit slower than ECP5
>>>>> - based on Digikey prices, not cheaper than ECP5, or even pricier at some points
>>>>>
>>>>> has anyone used them and has some data to share on the matter ?
>>>>
>>>> I dug into the parts a bit more and if I bite the bullet with the package, the T8 looks pretty good.  But it has some limitations.
>>>>
>>>> There is 12kB of block RAM available in widths that include x5 and multiples.  A bit odd, but potentially useful.  So the RAM could be 12 kW of 10 bits.  As instruction memory that can be useful.  I just wish it had a bit more memory.
>>>>
>>>> In the 81 pin BGA package it has only 1 of the five PLLs available and that is "simple" whatever that means.  That package only has 55 GPIOs.  Many of the I/O features are not available in the smaller packages.  The 144QFP is a monster and isn't of much use to me.
>>>>
>>>> Packaging is a PITA.
>>>
>>> I tried accessing the web site to get pricing at their store.  After days they have not approved my login so that I can visit their store.  A sales person sent me an email but has not responded to my reply.  Strange.  I was hoping to get pricing info on some larger parts.
>>>
>> They only approved my login after a fairly stroppy email. I think that
>> currently Efinix fail all tests for being ready to do business.
>>
>> I can't understand the package thing - I thought that Microchip had
>> proved that getting design-ins was key to getting business and making it
>> easy to get started was the key to getting design-ins.
>>
>> Sub 1mm pitch BGA packages add huge amounts of cost right at the very
>> front end of a project - often the bit when you're using the tea and
>> coffee budget to try and get the bosses interested. But 0.5mm pitch TQFP
>> and QFN can be hand soldered. I'm amazed that the aspiring newcomers to
>> the FPGA market aren't more actively filling that gap.
> 
> I hadn't thought of it that way.  I suppose some number of projects do start with midnight requisitioned, unsanctioned efforts.  Still, as the Xilinx reps pointed out some time ago, each package supported adds cost, not just up front, but continuing costs.  So any start up will be cautious with adding new packages.
> 
> Gowin and AGM both are supporting these easier to work with packages, but it is still unclear how "real" they are.  As I've posted, I have actually had a phone conference with Gowin representatives, but I get zero response from AGM.  I don't recall what packages Anlogic has, but other than a board at Seeed they have no presence.
> 
> I'm glad I'm not alone in preferring the QFN/QFP packages.
> 
You might almost think Lattice are listening:

The Crosslink 17 and 40 will be available in 72 pin QFN, as well as all 
the BGA packages.
You only get 40 IO pins out of the 72.


MK


Article: 161592
Subject: Re: Efinix and their new Trion FPGAs -
From: Rick C <gnuarm.deletethisbit@gmail.com>
Date: Thu, 12 Dec 2019 17:53:59 -0800 (PST)
Links: << >>  << T >>  << A >>
On Friday, November 29, 2019 at 2:51:51 AM UTC-5, Brane2 wrote:
> I'm talking about these guys:
> 
> https://www.efinixinc.com
> 
> Their Trion program seems interesting:
> 
> - it stretches from the area occupied by Lattice's MachXO3 on the low end to the ECP5 on the high end
> - no onboard FLASH. Just OTP on a few small models and nothing on the high end
> - universal tile that can do routing as well as LUT/MEM/logic
> - 5 bit BLOCK RAM instead of traditional 9-bit
> - additional simplifications on the process end claim a 4x chip price reduction (only 7 metallisation layers instead of 14, etc.)
> 
> Trion on first impression looks nice, but:
> 
> - a bit slower than ECP5
> - based on Digikey prices, not cheaper than ECP5, or even pricier at some points
> 
> has anyone used them and has some data to share on the matter ?

Looking at their manuals they seem to not provide any simulation capability.  They say they are compatible with iVerilog and don't mention VHDL anywhere in their documentation that I can find.  I guess they only support Verilog and you are on your own for the simulation capability.

Interesting.

-- 

  Rick C.

  ++ Get 1,000 miles of free Supercharging
  ++ Tesla referral code - https://ts.la/richard11209

Article: 161593
Subject: Re: Efinix and their new Trion FPGAs -
From: Rick C <gnuarm.deletethisbit@gmail.com>
Date: Thu, 12 Dec 2019 18:02:32 -0800 (PST)
Links: << >>  << T >>  << A >>
On Thursday, December 12, 2019 at 8:54:02 PM UTC-5, Rick C wrote:
> On Friday, November 29, 2019 at 2:51:51 AM UTC-5, Brane2 wrote:
> > I'm talking about these guys:
> > 
> > https://www.efinixinc.com
> > 
> > Their Trion program seems interesting:
> > 
> > - it stretches from the area occupied by Lattice's MachXO3 on the low end to the ECP5 on the high end
> > - no onboard FLASH. Just OTP on a few small models and nothing on the high end
> > - universal tile that can do routing as well as LUT/MEM/logic
> > - 5 bit BLOCK RAM instead of traditional 9-bit
> > - additional simplifications on the process end claim a 4x chip price reduction (only 7 metallisation layers instead of 14, etc.)
> > 
> > Trion on first impression looks nice, but:
> > 
> > - a bit slower than ECP5
> > - based on Digikey prices, not cheaper than ECP5, or even pricier at some points
> > 
> > has anyone used them and has some data to share on the matter ?
> 
> Looking at their manuals they seem to not provide any simulation capability.  They say they are compatible with iVerilog and don't mention VHDL anywhere in their documentation that I can find.  I guess they only support Verilog and you are on your own for the simulation capability.
> 
> Interesting.

I finally heard back from them about my login approval.  But there's nothing to see.  Their datasheets used to be behind the login and now they are not.  It got me into their store, but they only have sample kits, so you still need to contact a sales person for pricing, and I'm not clear what this says about availability of the T13 and T20.  Digikey only has the T4 and T8.

All the sample boxes have 10 chips and are $100.

The salesman wants to talk to me.  I guess I'll need to do that.

-- 

  Rick C.

  --- Get 1,000 miles of free Supercharging
  --- Tesla referral code - https://ts.la/richard11209

Article: 161594
Subject: Can't get image from PCam 5C running on Zybo Z7-20 with petalinux
From: Swapnil Patil <swapnilpatil27497@gmail.com>
Date: Sat, 14 Dec 2019 04:13:24 -0800 (PST)
Links: << >>  << T >>  << A >>
Hello folks,


I am running the Zybo Z7-20 Pcam petalinux project on a Zybo Z7-20 board.

According to the README.md file, the command below will create 14 image files in the current directory:

 yavta -c14 -f YUYV -s "$width"x"$height" -F /dev/video0

When we run a similar test, the log gets stuck as shown below:

root@test0:~# yavta -c1 -f YUYV -s "$width"x"$height" -F /dev/video0
Device /dev/video0 opened.
Device `video_cap output 0' on `platform:video_cap:0' is a video output (without mplanes) device.
Video format set: YUYV (56595559) 1280x720 field none, 1 planes:
 * Stride 2560, buffer size 1843200
Video format: YUYV (56595559) 1280x720 field none, 1 planes:
 * Stride 2560, buffer size 1843200
8 buffers requested.
length: 1 offset: 3201100384 timestamp type/source: mono/EoF
Buffer 0/0 mapped at address 0xb6c29000.
length: 1 offset: 3201100384 timestamp type/source: mono/EoF
Buffer 1/0 mapped at address 0xb6a67000.
length: 1 offset: 3201100384 timestamp type/source: mono/EoF
Buffer 2/0 mapped at address 0xb68a5000.
length: 1 offset: 3201100384 timestamp type/source: mono/EoF
Buffer 3/0 mapped at address 0xb66e3000.
length: 1 offset: 3201100384 timestamp type/source: mono/EoF
Buffer 4/0 mapped at address 0xb6521000.
length: 1 offset: 3201100384 timestamp type/source: mono/EoF
Buffer 5/0 mapped at address 0xb635f000.
length: 1 offset: 3201100384 timestamp type/source: mono/EoF
Buffer 6/0 mapped at address 0xb619d000.
length: 1 offset: 3201100384 timestamp type/source: mono/EoF
Buffer 7/0 mapped at address 0xb5fdb000.

From this we are not sure whether the images were created or not.

When we run the media device information command (media-ctl -p), we get the following log:

root@test0:~# media-ctl -p
Media controller API version 4.9.0

Media device information
------------------------
driver          xilinx-video
model           Xilinx Video Composite Device
serial          
bus info        
hw revision     0x0
driver version  4.9.0

Device topology
- entity 1: video_cap output 0 (1 pad, 1 link)
            type Node subtype V4L flags 0
            device node name /dev/video0
    pad0: Sink
        <- "43c60000.mipi_csi2_rx_subsystem":0 [ENABLED]

- entity 5: ov5640 1-003c (1 pad, 1 link)
            type V4L2 subdev subtype Sensor flags 0
            device node name /dev/v4l-subdev0
    pad0: Source
        [fmt:UYVY/1280x720 field:none]
        -> "43c60000.mipi_csi2_rx_subsystem":1 [ENABLED]

- entity 7: 43c60000.mipi_csi2_rx_subsystem (2 pads, 2 links)
            type V4L2 subdev subtype Unknown flags 0
            device node name /dev/v4l-subdev1
    pad0: Source
        [fmt:UYVY/1280x720 field:none]
        -> "video_cap output 0":0 [ENABLED]
    pad1: Sink
        [fmt:UYVY/1280x720 field:none]
        <- "ov5640 1-003c":0 [ENABLED]

When we run v4l2-ctl --all, we get:


root@test0:~# v4l2-ctl --all
Driver Info (not using libv4l2):
    Driver name   : xilinx-vipp
    Card type     : video_cap output 0
    Bus info      : platform:video_cap:0
    Driver version: 4.9.0
    Capabilities  : 0x84201000
        Video Capture Multiplanar
        Streaming
        Extended Pix Format
        Device Capabilities
    Device Caps   : 0x04201000
        Video Capture Multiplanar
        Streaming
        Extended Pix Format
Priority: 2
Format Video Capture Multiplanar:
    Width/Height      : 1280/720
    Pixel Format      : 'YUYV'
    Field             : None
    Number of planes  : 1
    Flags             :
    Colorspace        : Default
    Transfer Function : Default
    YCbCr Encoding    : Default
    Quantization      : Default
    Plane 0           :
       Bytes per Line : 2560
       Size Image     : 1843200
root@test0:~#

Thanks in advance.

Article: 161595
Subject: Optimizations, How Much and When?
From: Rick C <gnuarm.deletethisbit@gmail.com>
Date: Sat, 4 Jan 2020 11:59:11 -0800 (PST)
Links: << >>  << T >>  << A >>
My projects typically are implemented in small devices and so are often space constrained.  So I am in the habit of optimizing my code for implemented size.  I've learned techniques that minimize size and tend to use them everywhere, rather than the 20/80 rule of optimizing the 20% of the code where you get 80% of the benefit, or 10/90 or whatever your favorite numbers are.

I remember interviewing at a Japanese based company once where they worked differently.  They were designing large projects and felt it was counterproductive to worry with optimizations of any sort.  They wanted fast turnaround on their projects and so just paid more for larger and faster parts, I suppose.  In fact, in the interview I was asked my opinion and gave it.  A lead engineer responded with a mini-lecture about how that was not productive in their work.  I responded with my take, which was that once a few optimal coding techniques were learned, it was not time consuming to make significant gains in logic size, and that these same techniques provided a consistent coding style which allowed faster debug times.  Turns out a reply was not expected and, in fact, went against a cultural norm, resulting in no offer from this company.

I'm wondering where others' opinions and practice fall in this area.  I assume consistent coding practices are always encouraged.  How often do these practices include techniques to minimize size or increase speed?  What techniques are used to improve debugging?

-- 

  Rick C.

  - Get 1,000 miles of free Supercharging
  - Tesla referral code - https://ts.la/richard11209

Article: 161596
Subject: Re: Optimizations, How Much and When?
From: pault.eg@googlemail.com
Date: Sun, 5 Jan 2020 04:40:17 -0800 (PST)
Links: << >>  << T >>  << A >>
On Saturday, January 4, 2020 at 7:59:14 PM UTC, Rick C wrote:

> 
> I'm wondering where others' opinions and practice fall in this area.  I assume consistent coding practices are always encouraged.  How often do these practices include techniques to minimize size or increase speed?  What techniques are used to improve debugging?
> 

I size optimised this https://www.p-code.org/s430/ by leaving a few things out, using an 8-bit ALU rather than a 16-bit ALU, and matching the register file to the machxo3 FPGA architecture so that it was a good fit for the small FPGAs in that series. But that's not work related. That's from a small personal interest in small micros in small FPGAs that can be used with GCC :)

Otherwise I've been lucky enough not to have to worry too much about optimising for size.
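
The 8-bit-ALU trick mentioned above can be sketched roughly like this: a 16-bit add done in two passes through an 8-bit datapath, with the carry held in a register between the low and high bytes.  This is only an illustrative guess at the structure; none of the names or the start/continue control come from the actual s430 source.

```vhdl
-- Illustrative only: a 16-bit add serialised through an 8-bit ALU.
-- 'start' is pulsed for the low-byte cycle; the next cycle adds the
-- high bytes with the saved carry.  Names and control are assumptions.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity alu8_serial is
  port (
    clk   : in  std_logic;
    start : in  std_logic;                     -- '1' = low-byte cycle
    a, b  : in  std_logic_vector(7 downto 0);  -- current operand bytes
    q     : out std_logic_vector(7 downto 0);  -- current result byte
    c_out : out std_logic                      -- carry out of this byte
  );
end entity;

architecture rtl of alu8_serial is
  signal carry : std_logic := '0';
begin
  process(clk)
    variable cin  : unsigned(8 downto 0);
    variable sum9 : unsigned(8 downto 0);
  begin
    if rising_edge(clk) then
      if start = '1' then
        cin := (others => '0');                -- no carry into low byte
      else
        cin := (0 => carry, others => '0');    -- carry from previous byte
      end if;
      -- One 9-bit add: bit 8 is the carry out
      sum9  := ('0' & unsigned(a)) + ('0' & unsigned(b)) + cin;
      q     <= std_logic_vector(sum9(7 downto 0));
      carry <= sum9(8);
      c_out <= sum9(8);
    end if;
  end process;
end architecture;
```

The payoff is that the adder, and whatever other ALU operations share the datapath, is half the width, at the cost of an extra cycle per 16-bit operation.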

Article: 161597
Subject: Re: Optimizations, How Much and When?
From: David Brown <david.brown@hesbynett.no>
Date: Sun, 5 Jan 2020 16:29:49 +0100
Links: << >>  << T >>  << A >>
On 04/01/2020 20:59, Rick C wrote:
> My projects typically are implemented in small devices and so are
> often space constrained.  So I am in the habit of optimizing my code
> for implemented size.  I've learned techniques that minimize size and
> tend to use them everywhere rather than the 20/80 rule of optimizing
> the 20% of the code where you get 80% of the benefit, or 10/90 or
> whatever your favorite numbers are.

This seems a little mixed up.  The "20/80" (or whatever) rule is for 
/speed/ optimisation of software code.  The principle is that if your 
program spends most of its time in 10% or 20% of the code, then that is 
the only part you need to optimise for speed.  The rest of the code can 
be optimised for size.  Optimising only 10% or 20% of your code for size 
is pointless.

> 
> I remember interviewing at a Japanese based company once where they
> worked differently.  They were designing large projects and felt it
> was counter productive to worry with optimizations of any sort.  They
> wanted fast turn around on their projects and so just paid more for
> larger and faster parts I suppose.  In fact in the interview I was
> asked my opinion and gave it.  A lead engineer responded with a
> mini-lecture about how that was not productive in their work.  I
> responded with my take which was that once a few techniques were
> learned about optimal coding techniques it was not time consuming to
> make significant gains in logic size and that these same techniques
> provided a consistent coding style which allowed faster debug times.
> Turns out a reply was not expected and in fact, went against a
> cultural difference resulting in no offer from this company.
> 
> I'm wondering where others' opinions and practice fall in this area.
> I assume consistent coding practices are always encouraged.  How
> often do these practices include techniques to minimize size or
> increase speed?  What techniques are used to improve debugging?
> 

I can only really tell you about optimisation for software designs, 
rather than for programmable logic, but some things could be equally 
applicable.

First, you have to distinguish between different types of 
"optimisation".  The world simply means that you have a strong focus on 
one aspect of the development, nothing more.  You can optimise for code 
speed, development speed, power requirements, safety, flexibility, or 
any one of dozens of different aspects.  Some of these are run-time 
(like code speed), some are development time (like ease of debugging). 
You rarely really want to optimise, you just want to prioritise how you 
balance the different aspects of the development.

There are also many ways of considering optimisation for any one aspect. 
  Typically, there are some things that involve lots of work, some 
things that involve knowledge and experience, and some things that can 
be automated.

If we take the simple case of "code speed" optimisations, there are 
perhaps three main possibilities.

1. You can make a point of writing code that runs quickly.  This is a 
matter of ability, knowledge and experience for the programmer.  The 
programmer knows when to use data of different types, based on what will 
work well on the target device.  He/she knows what algorithms make 
sense.  He/she knows when to use multi-tasking and when to use a single 
threaded system - and so on.  There is rarely any disadvantage in doing 
this sort of thing, unless it becomes /too/ smart - then it can lead to 
maintainability issues if the original developer is not available.

2. You can enable compiler optimisations.  This is usually a cheap step 
- it's just a compiler flag.  Aggressive optimisations can make code 
harder to debug, but enabling basic optimisations typically makes it 
easier.  It also improves static analysis and warnings, which is always 
a good thing.  But there can be problems if the developers are not 
sufficiently trained or experienced, and tend to write "it worked when I 
tried it" code rather than knowing that their code is correct.

3. You can do a lot of work measuring performance and investigating 
different ways of handling the task at hand.  This can lead to the 
biggest gains in speed - but also takes the most time and effort for 
developers.


I expect the optimisations you are thinking of for programmable logic 
follow a similar pattern.

And I think a fair amount of apparent disagreements about "optimisation" 
comes mainly from a misunderstanding about types of optimisation, and 
which type is under discussion.




Article: 161598
Subject: Re: Optimizations, How Much and When?
From: Rick C <gnuarm.deletethisbit@gmail.com>
Date: Sun, 5 Jan 2020 10:24:07 -0800 (PST)
Links: << >>  << T >>  << A >>
On Sunday, January 5, 2020 at 10:29:53 AM UTC-5, David Brown wrote:
> On 04/01/2020 20:59, Rick C wrote:
> > My projects typically are implemented in small devices and so are
> > often space constrained.  So I am in the habit of optimizing my code
> > for implemented size.  I've learned techniques that minimize size and
> > tend to use them everywhere rather than the 20/80 rule of optimizing
> > the 20% of the code where you get 80% of the benefit, or 10/90 or
> > whatever your favorite numbers are.
> 
> This seems a little mixed up.  The "20/80" (or whatever) rule is for 
> /speed/ optimisation of software code.  The principle is that if your 
> program spends most of its time in 10% or 20% of the code, then that is 
> the only part you need to optimise for speed.  The rest of the code can 
> be optimised for size.  Optimising only 10% or 20% of your code for size 
> is pointless.

Lol!  I guess you don't really understand HDL.


> > I remember interviewing at a Japanese based company once where they
> > worked differently.  They were designing large projects and felt it
> > was counter productive to worry with optimizations of any sort.  They
> > wanted fast turn around on their projects and so just paid more for
> > larger and faster parts I suppose.  In fact in the interview I was
> > asked my opinion and gave it.  A lead engineer responded with a
> > mini-lecture about how that was not productive in their work.  I
> > responded with my take which was that once a few techniques were
> > learned about optimal coding techniques it was not time consuming to
> > make significant gains in logic size and that these same techniques
> > provided a consistent coding style which allowed faster debug times.
> > Turns out a reply was not expected and in fact, went against a
> > cultural difference resulting in no offer from this company.
> > 
> > I'm wondering where others' opinions and practice fall in this area.
> > I assume consistent coding practices are always encouraged.  How
> > often do these practices include techniques to minimize size or
> > increase speed?  What techniques are used to improve debugging?
> > 
> 
> I can only really tell you about optimisation for software designs, 
> rather than for programmable logic, 

Yes, that much we understand.

> but some things could be equally 
> applicable.
> 
> First, you have to distinguish between different types of 
> "optimisation".  The word simply means that you have a strong focus on 
> one aspect of the development, nothing more.  You can optimise for code 
> speed, development speed, power requirements, safety, flexibility, or 
> any one of dozens of different aspects.  Some of these are run-time 
> (like code speed), some are development time (like ease of debugging). 
> You rarely really want to optimise, you just want to prioritise how you 
> balance the different aspects of the development.

There is your first mistake.  Optimizing code can involve tradeoffs between size, speed, power consumption, etc., but can also involve finding ways to improve multiple aspects without tradeoffs.

If you are going to talk about a tradeoff between code development time (thinking) and other parametrics, then you are simply describing the process of optimization.  Duh!


> There are also many ways of considering optimisation for any one aspect. 
>   Typically, there are some things that involve lots of work, some 
> things that involve knowledge and experience, and some things that can 
> be automated.
> 
> If we take the simple case of "code speed" optimisations, there are 
> perhaps three main possibilities.
> 
> 1. You can make a point of writing code that runs quickly.  This is a 
> matter of ability, knowledge and experience for the programmer.  The 
> programmer knows when to use data of different types, based on what will 
> work well on the target device.  He/she knows what algorithms make 
> sense.  He/she knows when to use multi-tasking and when to use a single 
> threaded system - and so on.  There is rarely any disadvantage in doing 
> this sort of thing, unless it becomes /too/ smart - then it can lead to 
> maintainability issues if the original developer is not available.

Not at all rare that speed optimizations can create issues in other areas.  So that's your second mistake.

> 2. You can enable compiler optimisations.  This is usually a cheap step 
> - it's just a compiler flag.  Aggressive optimisations can make code 
> harder to debug, but enabling basic optimisations typically makes it 
> easier.  It also improves static analysis and warnings, which is always 
> a good thing.  But there can be problems if the developers are not 
> sufficiently trained or experienced, and tend to write "it worked when I 
> tried it" code rather than knowing that their code is correct.
> 
> 3. You can do a lot of work measuring performance and investigating 
> different ways of handling the task at hand.  This can lead to the 
> biggest gains in speed - but also takes the most time and effort for 
> developers.

Which is the basis of the 20/80 rule.  Don't spend time optimizing code that isn't going to give a good return.


> I expect the optimisations you are thinking of for programmable logic 
> follow a similar pattern.
> 
> And I think a fair amount of apparent disagreements about "optimisation" 
> comes mainly from a misunderstanding about types of optimisation, and 
> which type is under discussion.

I think you didn't really read my original post, where I mentioned using optimization techniques consistently in my coding as a matter of habit.  Rather than brute force an algorithm into code in the most direct way, I try to visualize the hardware that will implement the task and then use HDL to describe that hardware.  The alternative is to code the algorithm directly and let the tools try to sort it out in an efficient manner, which often fails because they are constrained to implement exactly what you have coded.

Once I couldn't figure out why I was getting two adder chains when I expected one chain with a carry output.  It seems I had made some tiny distinction between the two pieces of code, so the adders were not identical.  So now my habit is to code the actual adder in a separate line of code, outside the process that is using it, ensuring it is the same adder for both uses of the results.

This is why it's still the 20/80 rule, since there are large sections of code that don't have much to gain from optimization.  But a halving of the size is significant for the sections of code that can benefit.
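
A minimal sketch of that habit (the entity name and widths are illustrative, not from any real design): the addition is written exactly once as a concurrent statement, one bit wider than the operands, and both the sum and the carry are slices of that single result, so synthesis cannot end up building two nearly-identical chains.

```vhdl
-- Illustrative sketch: one widened adder, written once; the sum and
-- the carry are both slices of the same result.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity shared_adder is
  generic (N : natural := 16);
  port (
    a, b  : in  unsigned(N-1 downto 0);
    sum   : out unsigned(N-1 downto 0);
    carry : out std_logic
  );
end entity;

architecture rtl of shared_adder is
  signal full : unsigned(N downto 0);  -- N+1 bits: result plus carry
begin
  -- The only '+' for this pair of operands, outside any process
  full  <= ('0' & a) + ('0' & b);
  sum   <= full(N-1 downto 0);
  carry <= full(N);
end architecture;
```

Any process that needs the sum or the carry then reads these outputs rather than repeating the `+`.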

-- 

  Rick C.

  + Get 1,000 miles of free Supercharging
  + Tesla referral code - https://ts.la/richard11209

Article: 161599
Subject: Re: Optimizations, How Much and When?
From: Rick C <gnuarm.deletethisbit@gmail.com>
Date: Sun, 5 Jan 2020 10:33:01 -0800 (PST)
Links: << >>  << T >>  << A >>
On Sunday, January 5, 2020 at 7:40:21 AM UTC-5, pau...@googlemail.com wrote=
:
> On Saturday, January 4, 2020 at 7:59:14 PM UTC, Rick C wrote:
>=20
> >=20
> > I'm wondering where others' opinions and practice fall in this area.  I=
 assume consistent coding practices are always encouraged.  How often do th=
ese practices include techniques to minimize size or increase speed?  What =
techniques are used to improve debugging?=20
> >=20
>=20
> I size optimised this https://www.p-code.org/s430/ by leaving a few thing=
s out, using an 8-bit ALU rather than a 16-bit ALU, and matching the regist=
er file to the machxo3 FPGA architecture so that it was a good fit for the =
small FPGAs in that series. But that's not work related. That's from a smal=
l personal interest in small micros in small FPGAs that can be used with GC=
C :)
>=20
> Otherwise I've been lucky enough not to have to worry too much about opti=
mising for size.

Interesting effort.  I'm surprised the result is so small.  I'm also surpri=
sed the Cyclone V result is smaller than the Artix 7 result.  Any idea why =
the register count varies?  Usually the register count is fixed by the code=
.  Did the tools use register splitting for speed?=20

Does it take a lot of cycles to run code?=20

--=20

  Rick C.

  -- Get 1,000 miles of free Supercharging
  -- Tesla referral code - https://ts.la/richard11209


