
Messages from 156350

Article: 156350
Subject: Re: cloud design flow
From: HT-Lab <hans64@htminuslab.com>
Date: Thu, 13 Mar 2014 15:37:38 +0000
Links: << >>  << T >>  << A >>
On 13/03/2014 09:00, alb wrote:
> alb <al.basili@gmail.com> wrote:
>> I was wondering if there's any one out there working with a cloud design
>> flow which is tool agnostic.
> []
>
> uhm, considering the amount of replies to this thread I have only two
> possibilities to choose from:
>
> 1. nobody is using such paradigm in this community
> 2. nobody using this paradigm is willing to share it with this community
>
> In both cases I'm out of luck and guess that if I ever want to go down
> this path I'll be doing it on my own.
>
Hi Al,

It is not a paradigm, it is simply not a viable business model for most 
EDA companies. If you look at Plunify, for example, you will see they only 
managed to sign up two companies since 2011(?), and neither of them is 
one of the big 3.

In addition most companies prefer to keep their regression testing 
in-house. Splashing their crown jewels onto some cloud server in a 
foreign country is simply too risky. Also, you nearly always have to 
tweak your regression suite and doing it in the cloud might add an extra 
layer of hassle.

Regards,
Hans
www.ht-lab.com




Article: 156351
Subject: Re: cloud design flow
From: al.basili@gmail.com (alb)
Date: 13 Mar 2014 16:32:21 GMT
HT-Lab <hans64@htminuslab.com> wrote:
> On 13/03/2014 09:00, alb wrote:
>> alb <al.basili@gmail.com> wrote:
>>> I was wondering if there's any one out there working with a cloud design
>>> flow which is tool agnostic.
[]

> It is not a paradigm, it is simply not a viable business model for most 
> EDA companies. If you look at Plunify, for example, you will see they only 
> managed to sign up two companies since 2011(?), and neither of them is 
> one of the big 3.

This is quite interesting indeed, but surprising as well. When it comes 
to high computational demand it seems so obvious to move to a cloud 
service. Actually, before the buzzword 'cloud' jumped in it was called 
'grid' computing, where several farms/computer centers were connected 
together in order to increase throughput.

> In addition most companies prefer to keep their regression testing 
> in-house. Splashing their crown jewels onto some cloud server in a 
> foreign country is simply too risky. 

I honestly find this quite insane: the same companies are exchanging 
confidential information via much less secure means, like emails, faxes 
or phone lines, without anyone bothering about encryption.

A two-key encryption mechanism would be safe enough to prevent any 
intrusion. But if you really want to get paranoid there are other 
technologies available, like quantum cryptography.

> Also, you nearly always have to 
> tweak your regression suite and doing it in the cloud might add an extra 
> layer of hassle.

This I may take as an argument. Nevertheless I find it quite strange that 
everyone should build their own computer farm to handle big regression 
suites, with all the maintenance cost that goes with it.

Not only that: how many in-house solutions do we have to develop and debug 
constantly? On top of that, in-house solutions are rarely scalable unless 
a big effort is put in from the very beginning, again requiring a 
large investment in the IT department, ending up with more network 
specialists than FPGA designers.

Article: 156352
Subject: Re: cloud design flow
From: Sean Durkin <news_MONTH@tuxroot.de>
Date: Sun, 16 Mar 2014 12:06:43 +0100
alb wrote:
>> In addition most companies prefer to keep their regression testing 
>> in-house. Splashing their crown jewels onto some cloud server in a 
>> foreign country is simply too risky. 
> 
> I honestly find this quite insane: the same companies are exchanging 
> confidential information via much less secure means, like emails, faxes 
> or phone lines, without anyone bothering about encryption.
> 
> A two-key encryption mechanism would be safe enough to prevent any 
> intrusion. But if you really want to get paranoid there are other 
> technologies available, like quantum cryptography.

The problem is not so much getting the data to the cloud service
securely. The problem is trusting the cloud company to protect the data
once it is on their servers.

Since Snowden it is known that all the major companies that provide
cloud services are constantly subjected to court orders forcing them to
give the authorities access to pretty much anything that is stored on
their servers or otherwise handled there, and to the information as to
who is doing what with their services.

And since Snowden it is known that this is not only done if there are
suspicions of crime or terrorism, but also as a tool of corporate
espionage (see http://tinyurl.com/o5cuneg).

So for any European company it is more or less out of the question to
use cloud services for anything important anymore these days. Especially
not cloud services that are offered by companies that are US-based. Not
because they are evil, but because they are subject to US laws that
force them to grant access and disclose pretty much anything.

So, cloud storage? Maybe, but only if data is encrypted properly BEFORE
it is put there. Cloud computing for my simulations and synthesis? No
way am I uploading my project data anywhere I don't have complete
control over.

This may sound like paranoia, but it is not. It seems that nowadays
every week there's a new revelation proving reality is even worse than
what paranoid lunatics have had nightmares about for decades...

Fortunately, my designs are not big enough to require server farms, so
for me personally cloud services are of no interest anyway.


Article: 156353
Subject: how add an IP on vivado for Nexys4
From: marouen.arfaoui.42@gmail.com
Date: Sun, 16 Mar 2014 16:04:04 -0700 (PDT)
Hello,

I have a Nexys4 board (Artix-7) and I want to add an IP to MicroBlaze.
Can you help me?

Article: 156354
Subject: CoreABC from Microsemi
From: al.basili@gmail.com (alb)
Date: 17 Mar 2014 08:30:43 GMT
Hi everyone,

I'm currently laying down the architecture of an FPGA which is the main 
controller for a set of mechanisms, and we wanted to take advantage of an 
AMBA AHB to interconnect the various elements [1], including a slightly 
modified Microblaze core (MB) with an additional floating point unit.

The system does need an overall controller/sequencer or whatever 
you want to call it in order to coordinate the various subsystems 
(including the MB) in a sort of repeated cycle.

The above-mentioned controller could be written as a simple state 
machine, but now that we want to interconnect everything with an AHB we 
have to be compliant with it.

I have recently found that Microsemi provides an IP called CoreABC, which 
is a sort of lightweight processor that may be trimmed down to execute 
a very limited instruction set and can be interfaced with the bus.

Does anyone here have some experience with it? Is it recommended for 
this type of scenario, or would we be better off with a simple FSM which 
can handle the AHB protocol?

Any suggestion/comment/rant is appreciated.

Al

[1] the decision is motivated by the wish to reduce integration time 
and optimize the verification effort, since in the last project we ended 
up with a maze of intertwined wires and endless work spent on 
numerous testbenches.

-- 
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?

Article: 156355
Subject: [cross-post]path verification
From: al.basili@gmail.com (alb)
Date: 17 Mar 2014 15:44:41 GMT
Dear all,

I have a microcontroller with an FPU which is delivered as an IP (I mean 
the FPU). In order to run at a decent frequency, some of the operations 
are allowed to complete within a certain number of cycles, but the 
main problem is that we do not know how many.

That said, if we run the synthesis tool without timing constraints on 
those paths, we have a design that is much slower than it could be. 
Multicycle constraints are out of the question because they are hard to 
verify and maintain, so we decided to set false paths and perform 
post-layout sims to extract those values, to be used in the RTL in a 
second iteration.

There are several reasons why I do not particularly like this approach:

1. it relies on post-layout sims which are resource consuming 
2. if we change technology we will likely need to do the whole process 
   again 
3. we are obliged to perform incremental place&route since an optimized
   implementation (maybe done automatically) may have an impact on our
   delays.

So far we have not come up with an alternative solution that is not 
going to imply a redesign (like pipelining, c-slowing, retiming, ...).

Any ideas/suggestions?

Al

-- 
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?

Article: 156356
Subject: Re: full functional coverage
From: Jim Lewis <usevhdl@gmail.com>
Date: Mon, 17 Mar 2014 09:11:05 -0700 (PDT)
Hi Al,
> Another issue is the coverage collection. Imagine I have my set of units 
> all individually tested, all of them happily reporting some sort of 
> functional coverage. First of all I do not know why the heck we collect 
> coverage if we do not have a spec to compare it with and second of all 
> how shall I collect coverage of a specific unit when it's integrated in 
> the overall system? Does it make any sense to do it?
Functional coverage (the one you write - such as with OSVVM) tells you 
whether you have exercised all of the items in your test plan.  Your test 
may set out to achieve some objectives - the functional coverage observes 
these happening and validates that your directed test really did what it 
said it was going to do or that your random test did something useful.  
For more on this, see osvvm dot org and SynthWorks' OSVVM blog.
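
A minimal sketch of the kind of functional coverage Jim describes, using 
OSVVM's CoveragePkg. The signals, the stimulus process and the 1..16 
burst-length bin range are all invented for illustration, not from any 
real design:

```vhdl
-- Sketch only: assumes OSVVM is compiled into a library named 'osvvm'.
-- Clk, BurstLen and the bin range are hypothetical.
library ieee;
use ieee.std_logic_1164.all;
library osvvm;
use osvvm.CoveragePkg.all;

entity tb_cov_sketch is
end entity tb_cov_sketch;

architecture test of tb_cov_sketch is
  shared variable BurstCov : CovPType;      -- the coverage model
  signal Clk      : std_logic := '0';
  signal BurstLen : integer   := 0;
begin
  Clk <= not Clk after 5 ns;

  -- Stand-in for the real stimulus / DUT activity.
  Stim : process
  begin
    for i in 1 to 16 loop
      wait until rising_edge(Clk);
      BurstLen <= i;
    end loop;
    wait;
  end process;

  CollectCov : process
  begin
    BurstCov.AddBins(GenBin(1, 16));        -- one bin per burst length
    loop
      wait until rising_edge(Clk);
      BurstCov.ICover(BurstLen);            -- sample what actually happened
      exit when BurstCov.IsCovered;         -- every bin hit at least once
    end loop;
    BurstCov.WriteBin;                      -- per-bin report to the transcript
    report "Functional coverage complete";
    wait;
  end process;
end architecture test;
```

This mirrors the role Jim assigns to functional coverage: observing that 
the test really exercised each item in the test plan, as opposed to the 
line-by-line code coverage the tool collects for you.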

Code coverage (what the tool captures for you) tells you that you have 
executed all lines of code.  If your functional coverage is 100%, but your 
code coverage is not, it indicates there is logic there that is not covered 
by your test plan - either extra logic (which is bad) or your test plan is 
incomplete (which is also bad).

Functional coverage at 100% and Code coverage at 100% indicates that you 
have completed validation of the design.

Best Regards,
Jim

Article: 156357
Subject: Re: full functional coverage
From: Jim Lewis <usevhdl@gmail.com>
Date: Mon, 17 Mar 2014 09:12:26 -0700 (PDT)
Hi Al,
Looking at your second question:

> A purely system level approach might have too poor 
> observability/controllability at unit level and would not be efficient 
> to spot unit level problems, especially in the very beginning of the 
> coding effort, where the debug cycle is very fast. But if I start to 
> write unit level testbenches it would be unlikely that I will reuse 
> those benches at system level.
For some ideas on this, see my paper titled, "Accelerating Verification 
Through Pre-Use of System-Level Testbench Components" that is posted on 
SynthWorks' papers page.

If you would like this in a classroom setting, make sure to catch our 
VHDL TLM + OSVVM World Tour (our class VHDL Testbenches and Verification).  
Next stop is in Sweden on May 5-9 and Germany in July 14-18.  
http://www.synthworks.com/public_vhdl_courses.htm

Best Regards,
Jim

Article: 156358
Subject: Re: [cross-post]path verification
From: glen herrmannsfeldt <gah@ugcs.caltech.edu>
Date: Mon, 17 Mar 2014 19:31:55 +0000 (UTC)
In comp.arch.fpga alb <al.basili@gmail.com> wrote:
 
> I have a microcontroller with an FPU which is delivered as an IP (I mean 
> the FPU). In order to run at a decent frequency, some of the operations 
> are allowed to complete within a certain number of cycles, but the 
> main problem is that we do not know how many.

So you paid someone for this?

I am not sure what you mean by "a certain number of clock cycles"
and "do not know how many".

If it is all combinatorial, it will complete with some delay, not
in some number of clock cycles. That is, the delay will not depend
on any clock you supply. You then have to either be able to run
the design through timing analysis and see how long that is, or the
ones you bought it from should tell you.

More usually, though, the logic should have a signal indicating when
the result is valid. 
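
The result-valid arrangement glen suggests can be sketched as below. All 
port names here are hypothetical; the point is that the consumer captures 
the result only when the FPU flags it, so nobody needs to know the exact 
(possibly data-dependent) latency:

```vhdl
-- Sketch: a consumer that waits for the FPU's 'done' flag instead of
-- counting cycles. 'fpu_done' and 'fpu_result' are assumed port names.
library ieee;
use ieee.std_logic_1164.all;

entity fpu_consumer is
  port (
    clk        : in  std_logic;
    fpu_done   : in  std_logic;                      -- valid flag from the FPU IP
    fpu_result : in  std_logic_vector(31 downto 0);
    result     : out std_logic_vector(31 downto 0);
    result_stb : out std_logic                       -- one-cycle strobe downstream
  );
end entity fpu_consumer;

architecture rtl of fpu_consumer is
begin
  process (clk)
  begin
    if rising_edge(clk) then
      result_stb <= '0';
      if fpu_done = '1' then        -- capture only when the FPU says so
        result     <= fpu_result;
        result_stb <= '1';
      end if;
    end if;
  end process;
end architecture rtl;
```

With this handshake the multicycle/false-path question moves inside the 
IP, where its vendor can constrain it.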

You could run the FPU in the timing tools with a variety (random)
inputs and find out how long it takes. Then find the distribution
of delays, and find a reasonable maximum. It might be data dependent
and have a long tail. (A post-normalize shifter might depend on the
number of digits being shifted, and the rare long shifts would have
to be accounted for.)

> That said, if we run the synthesis tool without timing constraints on 
> those paths, we have a design that is much slower than it could be. 
> Multicycle constraints are out of the question because they are hard to 
> verify and maintain, so we decided to set false paths and perform 
> post-layout sims to extract those values, to be used in the RTL in a 
> second iteration.
 
> There are several reasons why I do not particularly like this approach:
 
> 1. it relies on post-layout sims which are resource consuming 
> 2. if we change technology we will likely need to do the whole process 
>   again 
> 3. we are obliged to perform incremental place&route since an optimized
>   implementation (maybe done automatically) may have an impact on our
>   delays.
 
> So far we have not come up with an alternative solution that is not 
> going to imply a redesign (like pipelining, c-slowing, retiming, ...).

The FPUs that I know of should be pipelined. (Is there a clock input?)
You shouldn't have to do the pipelining, but you do need to know the
number of clock cycles (and clock rate) for each operation. 

If the design is encrypted, such that you can't look at it, they
need to give you enough information to be able to use it.

-- glen

Article: 156359
Subject: Re: full functional coverage
From: al.basili@gmail.com (alb)
Date: 18 Mar 2014 09:09:47 GMT
Hi Jim,
Jim Lewis <usevhdl@gmail.com> wrote:
> Functional coverage (the one you write - such as with OSVVM) tells you 
> whether you have exercised all of the items in your test plan.  Your 
> test may set out to achieve some objectives - the functional coverage 
> observes these happening and validates that your directed test really 
> did what it said it was going to do or that your random test did 
> something useful.  For more on this, see osvvm dot org and SynthWorks' 
> OSVVM blog.

let's take a bus arbiter for example. It will provide access to the bus 
to multiple 'masters' with a certain priority and/or schedule. The 
problem arises when the 'slave' does not reply or finish at the expected 
time. Let's assume the bus protocol allows the arbiter to 'put on hold' 
the current master and allow others to access the bus.

In my unit testing I have full controllability and I can simulate a non 
responsive slave, therefore verifying the correct behavior of the 
arbiter, but when I integrate the arbiter into a system with several 
slaves which have been properly verified, I do not have the freedom to 
force a slave into a non-responsive state, therefore at system level I'm 
not able to cover that functionality (nor that code).

What happens then to my functional coverage? Should it be a collection 
of system level and unit level reports?

> Code coverage (what the tool captures for you) tells you that you have 
> executed all lines of code.  If your functional coverage is 100%, but 
> your code coverage is not, it indicates there is logic there that is 
> not covered by your test plan - either extra logic (which is bad) or 
> your test plan is incomplete (which is also bad).

What if my code has been exercised by a unit level test? Theoretically 
it has been exercised, but not in the system that it is supposed to work 
in. Does that coverage count?

> 
> Functional coverage at 100% and Code coverage at 100% indicates that 
> you have completed validation of the design.

There might be functionality that is not practically verifiable because 
it requires too long a simulation, whereas it can be easily validated on 
the bench with hardware running at full speed. What about those cases?

Article: 156360
Subject: Re: full functional coverage
From: Tom Gardner <spamjunk@blueyonder.co.uk>
Date: Tue, 18 Mar 2014 09:56:07 +0000
On 13/03/14 14:13, alb wrote:
> Hi everyone,
>
> I'm trying to understand how to improve our verification flow.
>
> As of now we have a great deal of unit level testbenches which are
> trying to make sure the unit is behaving correctly.
>
> Now, if I make a fair analysis of my above statement, I'm already doomed!
> What does it mean 'behaving correctly'? In our workflow we have an FPGA
> spec which is coming from /above/ and we have a verification matrix to
> define which verification method we apply for each requirement. The
> problem is: we do not have a unit level spec so how can we make sure the
> unit is correctly behaving?
>
> Moreover at the system level there are a certain number of scenarios
> which do not apply at unit level and viceversa, but the bottom line is
> that the system as a whole should be fully verified.
>
> Should I break down each system level requirement into a unit level one? That
> would be nearly as long as writing RTL code using only the fabric's
> primitives.
>
> Another issue is the coverage collection. Imagine I have my set of units
> all individually tested, all of them happily reporting some sort of
> functional coverage. First of all I do not know why the heck we collect
> coverage if we do not have a spec to compare it with and second of all
> how shall I collect coverage of a specific unit when it's integrated in
> the overall system? Does it make any sense to do it?
>
> A purely system level approach might have too poor
> observability/controllability at unit level and would not be efficient
> to spot unit level problems, especially in the very beginning of the
> coding effort, where the debug cycle is very fast. But if I start to
> write unit level testbenches it would be unlikely that I will reuse
> those benches at system level.
>
> As you may have noticed I'm a bit confused and any pointer would be
> greatly appreciated.

Old engineering maxim: you can't test quality into a product.
(You have to design it into the product)

Don't confuse verification and validation. A crude distinction
is that one ensures you have the right design, the other
ensures you have implemented the design right. (I always forget
which is which, sigh!)

One designer's unit is another designer's system.

Unit tests are helpful but insufficient; you also need
system integration tests.


Article: 156361
Subject: Re: full functional coverage
From: al.basili@gmail.com (alb)
Date: 18 Mar 2014 10:01:25 GMT
Hi Jim,
Jim Lewis <usevhdl@gmail.com> wrote:
[]
>> A purely system level approach might have too poor 
>> observability/controllability at unit level and would not be efficient 
>> to spot unit level problems, especially in the very beginning of the 
>> coding effort, where the debug cycle is very fast. But if I start to 
>> write unit level testbenches it would be unlikely that I will reuse 
>> those benches at system level.

> For some ideas on this, see my paper titled, "Accelerating 
> Verification Through Pre-Use of System-Level Testbench Components" 
> that is posted on SynthWorks' papers page.

I know that paper and it served me well in the past, but I have two 
issues with the proposed approach:
 
1. Following the example on the paper with the MemIO, the CpuIF has 
'two' interfaces, one towards the CPU, and another one toward the inner 
part of the entity. If we do not hook up the internal interface somehow 
we are not verifying the full unit. While in the example the internal 
interface might be very simple (like a list of registers), it might not 
always be the case.

2. As subblocks are available, together with testcases through the 
system level testbench, a complex configuration system has to be 
maintained in order to instantiate only what is needed. This overhead 
might be trivial if we have 4 subblocks, but it may pose several 
problems when the amount of them increases drastically.

I do not quite understand the reason for splitting testcases into separate 
architectures; I usually wrap the TbMemIO in what I call a 'harness' and 
instantiate it in each of my testcases. The harness grows as BFMs become 
available, and it never breaks an earlier testcase since a newly 
inserted BFM was not used in earlier testcases.

> If you would like this in a class room setting, make sure to catch our 
> VHDL TLM + OSVVM World Tour (our class VHDL Testbenches and 
> Verification).  Next stop is in Sweden on May 5-9 and Germany in July 
> 14-18.  http://www.synthworks.com/public_vhdl_courses.htm

Unfortunately neither the budget nor the time is available yet. I'd love 
to follow that course, but as of now I need to rely on my own trials and 
guidance like yours ;-)

Article: 156362
Subject: data read write to DDR2 SDRAM memory between microblaze and custom IP using PLB Bus
From: "makni" <99260@embeddedrelated>
Date: Tue, 18 Mar 2014 07:58:57 -0500
Hi everybody,

I implemented an XPS system using the PLB bus. My IP core is added to the
system using Create or Import Peripheral... 
I want to know how MicroBlaze can write several data words to DDR2 SDRAM 
and how the IP core can read all the data from this memory, modify it and 
write it back to DDR2 SDRAM via the PLB bus interface.

Therefore, I would like to write the image data directly to DDR SDRAM from
my custom IP core. However, I am not sure how I can get my custom core to
write to the DDR using the PLB bus.
If anyone could point me in the right direction, I would really appreciate
it. I know that I will need to make my custom core a "Master" on the PLB
bus but I can't find a good example on how to read/write to memory.

Thanks in advance.

	   
					
---------------------------------------		
Posted through http://www.FPGARelated.com

Article: 156363
Subject: Re: data read write to DDR2 SDRAM memory between microblaze and custom
From: GaborSzakacs <gabor@alacron.com>
Date: Tue, 18 Mar 2014 09:47:03 -0400
makni wrote:
> Hi everybody,
> 
> I implement an xps system by using the Bus PLB. My IP core is added to the
> system using Create or Import Peripheral... 
> I want to know how can Microblaze write several data to DDR2 SDRAM and how
> the IP core read all the data from this memory, modify it and write it back
> to DDR2 SDRAM via the PLB Bus interface.
> 
> Therefore, I would like to write the image data directly to DDR SDRAM from
> my custom IP core. However, I am not sure how I can get my custom core to
> write to the DDR using the PLB bus.
> If anyone could point me in the right direction, I would really appreciate
> it. I know that I will need to make my custom core a "Master" on the PLB
> bus but I can't find a good example on how to read/write to memory.
> 
> Thanks in advance.
> 
> 	   
> 					
> ---------------------------------------		
> Posted through http://www.FPGARelated.com

I think you're going about this the hard way.  The memory interface to
DDR2 for microblaze uses a multi-port memory controller (MPMC).  It
is easy to add another port to the MPMC for direct access to the
external memory from another IP core.  In the past I've found that
the "native port interface" (NPI) is easy enough to use, but there
are other choices in the newer versions that you might prefer.

If your IP also needs register access from the MB, then you can have
it also connect to the PLB as a slave.  However with the MPMC there is
no need to have a PLB master to access DDR2 memory.

-- 
Gabor

Article: 156364
Subject: license issue on synplify pro AE
From: al.basili@gmail.com (alb)
Date: 19 Mar 2014 16:29:01 GMT
Hi everyone,

we have a floating license for Libero IDE which has an 
ACTEL_SUMMIT feature that to my understanding should support the 
whole project flow, including license for ModelSim AE and 
Synplify Pro AE (Actel Edition).

I've correctly set up my LM_LICENSE_FILE env. variable and I'm 
able to get the license to run the Libero IDE (that I do not 
need much...). But when I try to run the synthesis tool I get an 
error message:

License for feature synplifypro is not available.

Unable to obtain license host ID.

Now I'm stuck! I've followed the troubleshooting guidelines from 
Microsemi and I've searched the net for the entire afternoon...

My lmutil provides the following information:

debian@debian:libero_v9.1$ lmutil lmstat -c $LM_LICENSE_FILE
lmutil - Copyright (c) 1989-2010 Flexera Software, Inc. All Rights Reserved.
Flexible License Manager status on Wed 3/19/2014 17:12

License server status: 1702@servicesserver
    License file(s) on servicesserver: D:\FlexLM\licenses\License_designer_floating_v9.dat:

servicesserver: license server UP (MASTER) v11.9

Vendor daemon status (on servicesserver):

  actlmgrd: UP v11.4

At a certain point I tried to change the -licensetype variable 
for synplify_pro (from command line), but no success with the 
following: synplifypro_actel, synplifypro_acteloem

There's still an item I haven't checked that just came to my 
mind: a firewall issue. AFAIK flexlm manages the various 
vendor daemons and provides the client with a dynamic port on 
which to talk. If for some reason my PC cannot talk/listen (I'm 
not sure which) on that port, then it would not work.

I may ask my sysadmin what is the flexlm configuration on the 
server side, even though I'm not sure she's going to be happy to 
provide that info! :-/

Any additional ideas?

Al

-- 
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?
A: Top-posting.
Q: What is the most annoying thing on usenet and in e-mail?

Article: 156365
Subject: Re: license issue on synplify pro AE
From: HT-Lab <hans64@htminuslab.com>
Date: Wed, 19 Mar 2014 16:36:42 +0000
On 19/03/2014 16:29, alb wrote:
> Hi everyone,
>
> we have a floating license for Libero IDE which has an
> ACTEL_SUMMIT feature that to my understanding should support the
> whole project flow, including license for ModelSim AE and
> Synplify Pro AE (Actel Edition).

I don't think so; Mentor and Synplify have their own features and daemons.

I have these increments in my Microsemi license:

INCREMENT synplifydsp_actel snpslmd
INCREMENT synplifydspsl snpslmd
INCREMENT synplifypro_actel snpslmd
INCREMENT actelmsimvlog mgcld
INCREMENT actelmsimvhdl mgcld

Regards,
Hans.
www.ht-lab.com


>
> I've correctly set up my LM_LICENSE_FILE env. variable and I'm
> able to get the license to run the Libero IDE (that I do not
> need much...). But when I try to run the synthesis tool I get an
> error message:
>
> License for feature synplifypro is not available.
>
> Unable to obtain license host ID.
>
> Now I'm stuck! I've followed the troubleshooting guidelines from
> Microsemi and I've searched the net for the entire afternoon...
>
> My lmutil provides the following information:
>
> debian@debian:libero_v9.1$ lmutil lmstat -c $LM_LICENSE_FILE
> lmutil - Copyright (c) 1989-2010 Flexera Software, Inc. All Rights Reserved.
> Flexible License Manager status on Wed 3/19/2014 17:12
>
> License server status: 1702@servicesserver
>      License file(s) on servicesserver: D:\FlexLM\licenses\License_designer_floating_v9.dat:
>
> servicesserver: license server UP (MASTER) v11.9
>
> Vendor daemon status (on servicesserver):
>
>    actlmgrd: UP v11.4
>
> At a certain point I tried to change the -licensetype variable
> for synplify_pro (from command line), but no success with the
> following: synplifypro_actel, synplifypro_acteloem
>
> There's still an item I haven't checked that just came to my
> mind, a firewall issue. AFAIK the flexlm handles the various
> vendor daemons and provide the client with a dynamic port on
> which to talk. If for some reason my pc cannot talk/listen (I'm
> not sure which) on that port then it would not work.
>
> I may ask my sysadmin what is the flexlm configuration on the
> server side, even though I'm not sure she's going to be happy to
> provide that info! :-/
>
> Any additional ideas?
>
> Al
>


Article: 156366
Subject: Re: full functional coverage
From: Jim Lewis <usevhdl@gmail.com>
Date: Wed, 19 Mar 2014 12:28:20 -0700 (PDT)
Hi Al,
> ... Snip ...
> 
> What happens then to my functional coverage? Should it be a collection 
> of system level and unit level reports?
See next answer.

> What if my code has been exercised by a unit level test? Theoretically 
> it has been exercised, but not in the system that it is supposed to work 
> in. Does that coverage count?
Many do integrate the functional coverage of the core (unit) and system.
The risk is that the system is connected differently than the core level 
testbench.  However, in your case you are testing for correctness, and you 
get this at the system level by accessing the working slaves.

However, VHDL does give you a couple of good ways to test a
non-responsive slave model in the system.  First, you can use a
VHDL configuration to swap in the non-responsive slave model for one
of the correct models.

Alternately (not my preference), you can use VHDL-2008 external names
and VHDL-2008 force commands to drive the necessary signals to make a 
correct model non-responsive.
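
The configuration-based swap might look like this sketch. All entity, 
architecture and instance names here are invented for illustration, not 
taken from Al's design:

```vhdl
-- Sketch: assumes a system testbench 'tb_system' whose architecture
-- 'test_harness' instantiates several slaves as 'ahb_slave', and that a
-- 'non_responsive' architecture of the slave model exists alongside the
-- normal one.
configuration tb_hung_slave_cfg of tb_system is
  for test_harness
    -- Swap one slave for the misbehaving model; the others keep the default.
    for slave_2_inst : ahb_slave
      use entity work.ahb_slave(non_responsive);
    end for;
  end for;
end configuration tb_hung_slave_cfg;
```

Simulating `tb_hung_slave_cfg` instead of the default testbench then 
exercises the arbiter's put-on-hold path without touching the verified 
slave code.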


> There might be functionality that is not practically verifiable because 
> it requires too long a simulation, whereas it can be easily validated on 
> the bench with hardware running at full speed. What about those cases?
For ASICs, the general solution to this is to use an emulator or FPGA board
to test it.  In most cases, your lab board is as good as an emulator.  The
only case where emulators would be better is if they collect coverage and 
report it back in a form that can be integrated with your other tests.

Best Regards,
Jim

Article: 156367
Subject: Re: license issue on synplify pro AE
From: Brian Drummond <brian3@shapes.demon.co.uk>
Date: Wed, 19 Mar 2014 22:25:22 GMT
Links: << >>  << T >>  << A >>
On Wed, 19 Mar 2014 16:29:01 +0000, alb wrote:

> hi everyone,
> 
> we have a floating license for Libero IDE which has an ACTEL_SUMMIT
> feature that to my understanding should support the whole project flow,
> including license for ModelSim AE and Synplify Pro AE (Actel Edition).
> 
> I've correctly set up my LM_LICENSE_FILE env. variable and I'm able to
> get the license to run the Libero IDE (that I do not need much...). But
> when I try to run the synthesis tool I get an error message:
> 
> License for feature synplifypro is not available.
> 
> Unable to obtain license host ID.
> 
> Now I'm stuck! I've followed the troubleshooting guidelines from
> Microsemi and I've searched the net for the entire afternoon...


My own startup script for Libero on Debian includes the following:

-----------------------------------------------------------------
HOSTNAME=myhostname
LICENSE_PATH=/home/brian/Projects/SteepestAscent/Actel/Licensing
LICENSE_NAME=License2013.dat

lmgrd -c $LICENSE_PATH/$LICENSE_NAME &

# Setup licensefile path so that Modelsim can find the correct licenses
export LM_LICENSE_FILE=$LICENSE_PATH/$LICENSE_NAME

# Synplify needs port@hostname to find its license
export SNPSLMD_LICENSE_FILE=1702@$HOSTNAME
-----------------------------------------------------------------

a SNPSLMD_LICENSE_FILE env variable may help?

- Brian

Article: 156368
Subject: Re: license issue on synplify pro AE
From: al.basili@gmail.com (alb)
Date: 19 Mar 2014 23:18:54 GMT
Links: << >>  << T >>  << A >>
Hi Hans,
HT-Lab <hans64@htminuslab.com> wrote:
>> we have a floating license for Libero IDE which has an
>> ACTEL_SUMMIT feature that to my understanding should support the
>> whole project flow, including license for ModelSim AE and
>> Synplify Pro AE (Actel Edition).
> 
> I don't think so, Mentor and Synplify have their own features 
> and daemons.
> 
> I have these increments in my Microsemi license:
> 
> INCREMENT synplifydsp_actel snpslmd
> INCREMENT synplifydspsl snpslmd
> INCREMENT synplifypro_actel snpslmd
> INCREMENT actelmsimvlog mgcld
> INCREMENT actelmsimvhdl mgcld

If I lmstat the license server I get only actlmgrd. As you say, I 
should have the other daemons running as well to provide the 
related features, but that's not the case.

I think at this point my only conclusion is that we have a Libero 
SA (stand-alone) license which *does not include* either 
synplify_pro or modelsim. Those third-party tools are indeed 
available through the Gold license, though.

It is not yet clear to me if the gold/platinum difference w.r.t. 
third-party tools makes some difference in terms of available 
features.

Al

Article: 156369
Subject: Re: license issue on synplify pro AE
From: al.basili@gmail.com (alb)
Date: 19 Mar 2014 23:19:15 GMT
Links: << >>  << T >>  << A >>
Hi Brian,
Brian Drummond <brian3@shapes.demon.co.uk> wrote:
[]
>> we have a floating license for Libero IDE which has an ACTEL_SUMMIT
>> feature that to my understanding should support the whole project flow,
>> including license for ModelSim AE and Synplify Pro AE (Actel Edition).
[] 
> My own startup script for Libero on Debian includes the following:
> 

that's nice to see somebody else out there sailing on uncharted 
waters with success! Same setup here, I'm only unhappy that 
openmotif is still in the non-free archives, which is a bit unfair 
for a full-fledged LGPL package.

> -----------------------------------------------------------------
> HOSTNAME=myhostname
> LICENSE_PATH=/home/brian/Projects/SteepestAscent/Actel/Licensing
> LICENSE_NAME=License2013.dat
> 
> lmgrd -c $LICENSE_PATH/$LICENSE_NAME &
> 
> # Setup licensefile path so that Modelsim can find the correct licenses
> export LM_LICENSE_FILE=$LICENSE_PATH/$LICENSE_NAME
> 
> # Synplify needs port@hostname to find its license
> export SNPSLMD_LICENSE_FILE=1702@$HOSTNAME
> -----------------------------------------------------------------
> 
> a SNPSLMD_LICENSE_FILE env variable may help?

Thanks for the hint, I keep the env. variables in my .bashrc but 
it is pretty similar to what you write. I think the main problem 
lies in our floating license being Libero SA only (see my 
previous reply in this thread). By the way, SNPSLMD_LICENSE_FILE 
is optional, you should be able to live without it.

Al

Article: 156370
Subject: Re: full functional coverage
From: al.basili@gmail.com (alb)
Date: 19 Mar 2014 23:19:40 GMT
Links: << >>  << T >>  << A >>
Hi Jim.
Jim Lewis <usevhdl@gmail.com> wrote:
[]
>> What if my code has been exercised by a unit level test? Theoretically 
>> it has been exercised, but not in the system that it is supposed to work 
>> in. Does that coverage count?
> Many do integrate the functional coverage of the core (unit) 
> and system. The risk is that the system is connected 
> differently than the core level testbench.

that is indeed yet another reason for 'fearing' the core level 
testbench approach. Thanks for pointing that out.

> However, VHDL does give you a couple of good ways to test a
> non-responsive slave model in the system.  First, you can use a
> VHDL configuration to swap in the non-responsive slave model for one
> of the correct models.

As easy as it sounds, it looks to be the most elegant solution. A 
simple behavioral model of a non-responsive module may trigger 
various types of scenarios.

> 
> Alternately (not my preference), you can use VHDL-2008 external names
> and VHDL-2008 force commands to drive the necessary signals to make a 
> correct model non-responsive.

I've found a third option which might be quite interesting 
especially when dealing with standard buses:

klabs.org/richcontent/software_content/vhdl/force_errors.pdf

The interesting thing about this technique is how you can 
randomly generate errors on a bus without the need to model them. 
You do need to model the coverage though.

>> There might be functionality that is not practically 
>> verifiable because it requires too long a simulation, whereas 
>> it can be easily validated on the bench with hardware running 
>> at full speed. What about those cases?
> For ASICs, the general solution to this is to use an emulator 
> or FPGA board to test it.  In most cases, your lab board is as 
> good as an emulator.  The only case where emulators would be 
> better is if they collect coverage and report it back in a 
> form that can be integrated with your other tests.

In that case we have a pile of docs (verification reports, 
compliance matrices, coverage results, ...) which fills the gap 
(and the day).

Al

Article: 156371
Subject: Re: full functional coverage
From: Jim Lewis <usevhdl@gmail.com>
Date: Wed, 19 Mar 2014 17:03:16 -0700 (PDT)
Links: << >>  << T >>  << A >>
Hi Al,
> 1. Following the example on the paper with the MemIO, the CpuIF has 
> 'two' interfaces, one towards the CPU, and another one toward the inner 
> part of the entity. If we do not hook up the internal interface somehow 
> we are not verifying the full unit. While in the example the internal 
> interface might be very simple (like a list of registers), it might not 
> be always the case.

It is a problem.  If the interface is simple enough, you test it when 
integrating the next block.  If the interface is complex enough, it could
be worth writing a behavioral model to test it.

> 2. As subblocks are available, together with testcases through the 
> system level testbench, a complex configuration system has to be 
> maintained in order to instantiate only what is needed. This overhead 
> might be trivial if we have 4 subblocks, but it may pose several 
> problems when the amount of them increases drastically.
The alternative of course is to instantiate them all and deal with the
run time penalty of having extra models present that are not being used.
Depending on your system, this may be ok.

The other alternative is to develop separate testbenches for each of the 
different sets of blocks being tested - I suspect the configurations will
always be easier than this.  

You are right though, too many configurable items result in a 
proliferation of configurations that I call a configuration explosion. :)

> I do not quite understand the reason to split testcases in separate 
> architectures, I use to envelop the TbMemIO in what I call 'harness' and 
> instantiate it in each of my testcases. The harness grows as BFMs become 
> available and it never breaks an earlier testcase since the newly 
> inserted BFM was not used in earlier testcases.
This separation is important for reuse at different levels of testing.  
Separate the test case from the models that implement the exact interface 
behavior.  The models can be implemented with either a procedure in a package 
(for simple behaviors) or an entity and architecture (for more complex models -
this is what the paper I referenced shows).  

Let's say you are testing a UART.  Most UART tests can be done both at the
core-level and system-level.  If you have the test cases separated in this
manner, then by using different behavior models in the system, you can 
use the same test cases to test both.

> > If you would like this in a class room setting, make sure to catch our 
> > VHDL TLM + OSVVM World Tour (our class VHDL Testbenches and 
> > Verification).  Next stop is in Sweden on May 5-9 and Germany in July 
> > 14-18.  http://www.synthworks.com/public_vhdl_courses.htm
> 
> Unfortunately cost and time are not yet available. I'd love to follow 
> that course but as of now I need to rely on my own trials and 
> guidance like yours ;-)
Do you work for free?  If not, I suspect learning by reading, trial 
and error, and searching the internet when you run into issues is 
going to cost much more than a class.  You seem to be progressing 
ok though.  

Jim

Article: 156372
Subject: Re: full functional coverage
From: Jim Lewis <usevhdl@gmail.com>
Date: Wed, 19 Mar 2014 17:19:58 -0700 (PDT)
Links: << >>  << T >>  << A >>
Hi Al,
> I've found a third option which might be quite interesting 
> especially when dealing with standard buses:
> 
> klabs.org/richcontent/software_content/vhdl/force_errors.pdf
> 
> The interesting thing about this technique is how you can 
> randomly generate errors on a bus without the need to model them. 
> You do need to model the coverage though.

This is ok at a testbench level, but if you want to use it 
inside your system, you will need to check whether your synthesis
tool is tolerant of user-defined resolution functions.  

You will want to make sure that the error injector (on the
generation side) communicates with the checker (on the receiver
side) so that the checker knows it is expecting a particular type
of error.  In addition, you will want your test case generator to
be able to initiate errors in a non-random fashion.

For my testbenches, I like each BFM to be able to generate any
type of error that the DUT can gracefully handle.  Then I set
up the transactions so they can communicate this information.
To randomly generate stimulus and inject errors, I use the
OSVVM randomization methods to do this at the test case generation
level (TestCtrl).    
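That last step might look something like this hedged sketch (only RandomPType and its InitSeed/DistInt/RandSlv methods come from OSVVM's RandomPkg; every other name, including the weights, is a placeholder):

```vhdl
-- Hypothetical sketch: picking an error type at the test case
-- generation level with OSVVM randomization, then tagging the
-- transaction so BFM and checker agree on the expected error.
library osvvm;
use osvvm.RandomPkg.all;

-- inside the TestCtrl architecture:
ControlProc : process
  variable RV      : RandomPType;
  variable ErrType : integer;
begin
  RV.InitSeed(RV'instance_name);
  for i in 1 to 100 loop
    -- weights are made up: 80% clean, 10% parity err, 10% framing err
    ErrType := RV.DistInt((80, 10, 10));
    -- placeholder transaction call: data plus the expected-error tag
    -- Send(TransRec, Data => RV.RandSlv(0, 255, 8), ErrType => ErrType);
  end loop;
  wait;
end process ControlProc;
```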

Cheers,
Jim

P.S. 
Like @Tom Gardner hinted at, I avoid the word "UNIT" as it
means different things to different organizations.  I have seen
authors use UNIT to mean anywhere from Design Unit (Entity/Architecture) 
to a Core to a Subsystem (a box of boards that plugs into an airplane).  
As a result, I use Core/Core Tests as most people seem to relate that 
to being a reusable piece of intellectual property - like a UART.   

Article: 156373
Subject: Re: full functional coverage
From: Tom Gardner <spamjunk@blueyonder.co.uk>
Date: Thu, 20 Mar 2014 01:07:11 +0000
Links: << >>  << T >>  << A >>
On 20/03/14 00:19, Jim Lewis wrote:
> Like @Tom Gardner hinted at, I avoid the word "UNIT" as it
> means different things to different organizations.  I have seen
> authors use UNIT to mean anywhere from Design Unit (Entity/Architecture)
> to a Core to a Subsystem (a box of boards that plugs into an airplane).
> As a result, I use Core/Core Tests as most people seem to relate that
> to being a reusable piece of intellectual property - like a UART.

IMNSHO, the "unit" is the thing to which the stimulus
is applied and the response measured. Hence a "unit"
can be a capacitor, opamp, bandpass filter, register,
adder, xor gate, ALU, CPU, whatever is hidden inside
an FPGA, PCBs containing one or more of the above, a crate
of PCBs, a single statement a=b+c, or a statement invoking
library functions such as anArrayOfPeople.sortByName() or aMessage.email().

What too many people don't understand is that in many
of those cases they aren't "simple unit tests", rather
they are integration tests - even if it is only testing
that your single statement works with the library functions.

Another point, which is too often missed, is to ask "what
am I trying to prove with this test". Classically in hardware
you need one type of test to prove the design is correct,
and an /entirely different/ set of tests to prove that
each and every manufactured item has been manufactured
correctly.

But I'm sure you are fully aware of all that!


Article: 156374
Subject: Xilinx ISERDESE2 deserializer primitive behaviour
From: Carl <carwer0@gmail.com>
Date: Thu, 20 Mar 2014 01:41:55 -0700 (PDT)
Links: << >>  << T >>  << A >>
Hi,
I'm having some problems understanding the exact behavior of the ISERDESE2
primitive. What I need to understand is exactly how the unit distributes
the serial input to the bits in the output (parallel) words, or in other
words, how the ISERDESE2 aligns the frames on the incoming serial data
stream in order to deliver the parallel words.

Here is a screenshot showing the signals on some of the ports of a
cascaded (width expansion) ISERDES pair:
http://forums.xilinx.com/t5/image/serverpage/image-id/11986i790019220C7960F0/

This example is actually from a simulation of the example design that is
generated by Coregen (SelectIO Interface Wizard). Width is 14 bits, DDR
mode. The VHDL file can be downloaded here:
http://forums.xilinx.com/xlnx/attachments/xlnx/7Series/3904/2/selectio_if_wiz_v4_1.zip

The signal iserdes_q_vec is a vector [slave q8..q3] & [master q8..q1].

From the input (ddly) and output (iserdes_q_vec) I can clearly see how
the frame is aligned - I have marked the frame with the two cursors. It
is clear that the ddly input within this frame (10000111100100) is what
appears on iserdes_q_vec on the next rising edge of clkdiv.

The reason for this particular alignment is however unknown to me. I've
looked in the User Guide (ug471) but can't find info on this.

I tried to de-assert rst in various ways but couldn't really make
anything out of the results (in this design, rst is de-asserted
synchronously with clkdiv, as suggested by the user guide).

In my case, I have no training pattern and hence can't find the right
alignment with bitslip operations. In my case, for the serial data,
clkdiv is equal to the frame clock. From the simulation I can of course
determine how the frame is placed in the serial data, and fix the
parallel words in custom logic, but then I need to understand why I get
this particular offset, and be convinced that I will always get exactly
this particular offset.

I would intuitively have expected clkdiv to act as a frame clock as
well, but nothing in the UG suggests that, and according to the
simulation, that is also clearly not the case.

Device: xc7k160t-2

Thanks in advance,
Carl


