Messages from 38225

Article: 38225
Subject: Triscend ARM+FPGA chips Experience
From: meng.engineering@bluewin.ch (Markus Meng)
Date: 9 Jan 2002 07:01:39 -0800
Links: << >>  << T >>  << A >>
Hi all,

I'm looking for information on experience using the Triscend
solution, including the ARM7 + SDRAM controller + system glue and the FPGA
core...

To what can you compare the Triscend logic block in the FPGA?
Has somebody already used those chips successfully in a real
application?
Is there a plan for a new family including an MMU on chip?

thanks

markus

Article: 38226
Subject: Where can I download Maxlock's PCI core? (It's a free core, but I can't download it from their WWW site.) Can anyone send it to me by email?
From: "Victor Levandovsky" <vic@elsyst.km.ua>
Date: Wed, 9 Jan 2002 17:02:53 +0200
Links: << >>  << T >>  << A >>
1



Article: 38227
Subject: FPGA and CCD : any experience?
From: Gacquer William <wgacquer@yahoo.fr>
Date: Wed, 09 Jan 2002 16:06:40 +0100
Links: << >>  << T >>  << A >>
Hello
	has anybody tried to connect an FPGA to several CCDs (for imaging 
purposes, of course)?
	I am new to FPGA programming.
	Regards,
	William Gacquer


Article: 38228
Subject: Re: distributed ram bits in XCVxxxx series
From: Peter Alfke <palfke@earthlink.net>
Date: Wed, 09 Jan 2002 15:40:59 GMT
Links: << >>  << T >>  << A >>
All Xilinx FPGAs since the XC4000 days can use the 16-bit Look-up table, normally used as logic, alternatively as RAM. So, there are 16 bits of RAM in every LUT.

Peter Alfke, Xilinx Applications
========================
Matthias Weber wrote:

> hi,
>
> I have read in the FPGA specifications (Preliminary Product Specification v2.2) about distributed RAM bits (e.g. 614,400 within the XCV2000E).
> Is this kind of RAM used for the FPGA's configuration or for user-specific usage (i.e. is it the 2 flip-flops/latches in each logic cell, or are they extra bits)? If the latter, are the extra bits
> chosen to reduce signal propagation delays?
>
> thanks,
>
> matthias
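As an illustration of the point above (the entity and signal names are mine, not from the thread), a 16x1 distributed RAM can either be instantiated as a RAM16X1S primitive or inferred from VHDL along these lines; this is a minimal sketch, assuming a synthesis tool that maps small asynchronous-read arrays onto LUT RAM:

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity dram16x1 is
      port (
        clk  : in  std_logic;
        we   : in  std_logic;
        addr : in  std_logic_vector(3 downto 0);
        din  : in  std_logic;
        dout : out std_logic);
    end dram16x1;

    architecture rtl of dram16x1 is
      type ram_t is array (0 to 15) of std_logic;
      signal ram : ram_t;
    begin
      process (clk)
      begin
        if rising_edge(clk) then
          if we = '1' then
            ram(to_integer(unsigned(addr))) <= din;  -- synchronous write, as in LUT RAM
          end if;
        end if;
      end process;
      dout <= ram(to_integer(unsigned(addr)));       -- asynchronous read, as in distributed RAM
    end rtl;

The 614,400 figure in the original question is consistent with this: 38,400 LUTs x 16 bits = 614,400 distributed RAM bits (my arithmetic, assuming the 38,400-LUT size of the XCV2000E).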


Article: 38229
Subject: Re: ADPCM?
From: Austin Lesea <austin.lesea@xilinx.com>
Date: Wed, 09 Jan 2002 07:51:11 -0800
Links: << >>  << T >>  << A >>
Anthony,

The basic algorithms themselves are detailed in the standards.  The ITU-T G
series for Europe (and the rest of the world), or the ATIS "T1 Committee"
standards for the US.

Check out the IP cores that Xilinx offers for easy guaranteed compliance with
these standards.

Austin

Anthony Ellis wrote:

> Hi,
>
> We are interested in implementing ADPCM in an FPGA. Any idea where one can
> get the algorithms (tested) for this compression technique?
> Anthony


Article: 38230
Subject: Re: Repost: Should clock skew be included for setup time analysis?
From: Kevin Brace <nospamtomekevinbraceusenet@nospamtomehotmail.com>
Date: Wed, 09 Jan 2002 09:55:15 -0600
Links: << >>  << T >>  << A >>
Bob,

I guess my initial posting missed several points.
The thing I was concerned about was the setup time (Tsu) for signals coming
from outside the chip.
Here is part of a timing report that should clarify my point.


================================================================================
Timing constraint: COMP "irdy_n" OFFSET = IN 7 nS  BEFORE COMP "clk" ;

 567 items analyzed, 68 timing errors detected.
 Minimum allowable offset is   9.445ns.
--------------------------------------------------------------------------------
Slack:                  -2.445ns (requirement - (data path - clock path
- clock arrival))
  Source:               irdy_n
  Destination:          PCI_IP_Core_Instance_ad_Port_21
  Destination Clock:    clk_BUFGP rising at 0.000ns
  Requirement:          7.000ns
  Data Path Delay:      11.833ns (Levels of Logic = 5)
  Clock Path Delay:     2.388ns (Levels of Logic = 2)
  Timing Improvement Wizard
  Data Path: irdy_n to PCI_IP_Core_Instance_ad_Port_21
    Delay type         Delay(ns)  Logical Resource(s)
    ----------------------------  -------------------
    Tiopi                 1.224   irdy_n
                                  irdy_n_IBUF
    net (fanout=47)       1.974   irdy_n_IBUF
    Tilo                  0.653   PCI_IP_Core_Instance_I_XXL_1384
    net (fanout=32)       1.564   PCI_IP_Core_Instance_N2952
    Tilo                  0.653   PCI_IP_Core_Instance_I_411_LUT_40
    net (fanout=1)        1.648   PCI_IP_Core_Instance_N4755
    Tilo                  0.653   PCI_IP_Core_Instance_I__n0024_21
    net (fanout=1)        2.256   PCI_IP_Core_Instance_N4759
    Tioock                1.208   PCI_IP_Core_Instance_ad_Port_21
    ----------------------------  ------------------------------
    Total                11.833ns (4.391ns logic, 7.442ns route)
                                  (37.1% logic, 62.9% route)

  Clock Path: clk to PCI_IP_Core_Instance_ad_Port_21
    Delay type         Delay(ns)  Logical Resource(s)
    ----------------------------  -------------------
    Tgpio                 1.082   clk
                                  clk_BUFGP/IBUFG
    net (fanout=1)        0.007   clk_BUFGP/IBUFG
    Tgio                  0.773   clk_BUFGP/BUFG
    net (fanout=468)      0.526   clk_BUFGP
    ----------------------------  ------------------------------
    Total                 2.388ns (1.855ns logic, 0.533ns route)
                                  (77.7% logic, 22.3% route)

--------------------------------------------------------------------------------




--------------------------------------------------------------------------------
4 constraints not met.


Data Sheet report:
-----------------
All values displayed in nanoseconds (ns)

Setup/Hold to clock clk
---------------+------------+------------+
               |  Setup to  |  Hold to   |
Source Pad     | clk (edge) | clk (edge) |
---------------+------------+------------+
ad<0>          |    5.516(R)|    0.000(R)|
ad<10>         |    5.691(R)|    0.000(R)|
ad<11>         |    4.457(R)|    0.000(R)|
ad<12>         |    4.478(R)|    0.000(R)|
ad<13>         |    3.422(R)|    0.000(R)|
ad<14>         |    3.916(R)|    0.000(R)|
ad<15>         |    4.468(R)|    0.000(R)|
ad<16>         |    3.490(R)|    0.000(R)|
ad<17>         |    3.402(R)|    0.000(R)|
ad<18>         |    3.470(R)|    0.000(R)|
ad<19>         |    3.821(R)|    0.000(R)|
ad<1>          |    4.943(R)|    0.000(R)|
ad<20>         |    3.848(R)|    0.000(R)|
ad<21>         |    4.427(R)|    0.000(R)|
ad<22>         |    4.728(R)|    0.000(R)|
ad<23>         |    3.456(R)|    0.000(R)|
ad<24>         |    2.986(R)|    0.000(R)|
ad<25>         |    3.444(R)|    0.000(R)|
ad<26>         |    3.260(R)|    0.000(R)|
ad<27>         |    3.517(R)|    0.000(R)|
ad<28>         |    4.303(R)|    0.000(R)|
ad<29>         |    4.606(R)|    0.000(R)|
ad<2>          |    5.781(R)|    0.000(R)|
ad<30>         |    3.844(R)|    0.000(R)|
ad<31>         |    3.945(R)|    0.000(R)|
ad<3>          |    5.734(R)|    0.000(R)|
ad<4>          |    4.404(R)|    0.000(R)|
ad<5>          |    5.967(R)|    0.000(R)|
ad<6>          |    5.709(R)|    0.000(R)|
ad<7>          |    4.880(R)|    0.000(R)|
ad<8>          |    4.615(R)|    0.000(R)|
ad<9>          |    3.910(R)|    0.000(R)|
c_be_n<0>      |    6.539(R)|    0.000(R)|
c_be_n<1>      |    6.116(R)|    0.000(R)|
c_be_n<2>      |    5.946(R)|    0.000(R)|
c_be_n<3>      |    7.348(R)|    0.000(R)|
frame_n        |    8.493(R)|    0.000(R)|
idsel          |    0.884(R)|    0.000(R)|
irdy_n         |    9.445(R)|    0.000(R)|
par            |    7.649(R)|    0.000(R)|
---------------+------------+------------+

Clock clk to Pad
---------------+------------+
               | clk (edge) |
Destination Pad|   to PAD   |
---------------+------------+
ad<0>          |    9.840(R)|
ad<10>         |    9.731(R)|
ad<11>         |    9.731(R)|
ad<12>         |    9.731(R)|
ad<13>         |    9.733(R)|
ad<14>         |    9.733(R)|
ad<15>         |    9.733(R)|
ad<16>         |    9.734(R)|
ad<17>         |    9.733(R)|
ad<18>         |    9.733(R)|
ad<19>         |    9.733(R)|
ad<1>          |    9.840(R)|
ad<20>         |    9.734(R)|
ad<21>         |    9.733(R)|
ad<22>         |    9.733(R)|
ad<23>         |    9.733(R)|
ad<24>         |    9.731(R)|
ad<25>         |    9.731(R)|
ad<26>         |    9.734(R)|
ad<27>         |    9.734(R)|
ad<28>         |    9.774(R)|
ad<29>         |    9.779(R)|
ad<2>          |    9.786(R)|
ad<30>         |    9.782(R)|
ad<31>         |    9.784(R)|
ad<3>          |    9.786(R)|
ad<4>          |    9.784(R)|
ad<5>          |    9.782(R)|
ad<6>          |    9.779(R)|
ad<7>          |    9.734(R)|
ad<8>          |    9.734(R)|
ad<9>          |    9.731(R)|
devsel_n       |    9.731(R)|
par            |    9.733(R)|
perr_n         |    9.733(R)|
serr_n         |    9.733(R)|
stop_n         |    9.731(R)|
trdy_n         |    9.734(R)|
---------------+------------+
--------------------------------------------------------------------------------


        In this example, IRDY# (irdy_n) comes from outside of the chip,
goes through 3 levels of LUTs, and goes into an IOB output FF.
Some people may ask why I don't register the input, but I am
already using a registered version of IRDY# whenever that is possible,
such as during a configuration cycle, a single transfer, or the first
transfer of a burst where wait cycles are being inserted.
If wait cycles are being inserted for several cycles, the registered
version and the raw (unregistered) version of the signal are guaranteed
to be the same by the PCI specification (unless a parity error occurs),
because once a signal is asserted, the specification doesn't allow it to
change until the end of the transfer (or the end of a microaccess within
a burst transfer).
During a no-wait-cycle burst transfer, a PCI device has to see the raw
signal to determine what to do next, and from looking in Floorplanner at
the inputs going into the above LUTs, the signal path shown above is the
one that handles the no-wait burst transfer state of the state machine.
Note that the routing delay shown in the above report looks pretty bad,
but I should be able to reduce it when I floorplan the design.
        To restate my concern, what worries me is that if a chip
that can pass testing for a Spartan-II speed grade -6 is sold as a speed
grade -5 because of yield improvement, the actual clock skew for that
chip will be reduced by about 20% (from 2.388ns (-5) to about 1.90ns
(-6)).
Yes, I guess the LUT delay will likely drop by at least 20% (when I
resynthesized my design for Spartan-II speed grade -6, the logic and
routing delay seemed to have dropped faster than the clock skew), so the
end result will be that the timings will still be met . . . I guess.
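A quick back-of-the-envelope check with the numbers above (my arithmetic, not from the report) illustrates why the tracking matters:

    Slack = requirement - (data path - clock path)
          = 7.000 - (11.833 - 2.388)   = -2.445 ns   (report numbers)

    Clock path alone 20% faster (2.388 -> ~1.91 ns), data path unchanged:
          = 7.000 - (11.833 - 1.910)   = -2.923 ns   (worse)

    Both paths 20% faster (11.833 -> ~9.47 ns, 2.388 -> ~1.91 ns):
          = 7.000 - (9.466 - 1.910)    = -0.556 ns   (better)

So if the data path and the clock path scale together, a faster-than-rated die actually improves the slack; the worry only materializes if the clock path speeds up much more than the data path.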
Another concern is low temperatures, because at lower temperatures
clock skew should decrease.
I guess logic and routing delay will also decrease, so the end result
will be that the timings will still be met . . . I guess.
        I must say that the views I have here are pretty pessimistic.
I assume that the clock skew will decrease at lower temperatures, but
the LUT, DFF setup time, and routing delay will not decrease at all,
which is not true.
Should I redo the timing report analysis at lower temperature, and
believe those numbers?
What I am trying to do, to some extent, is keep the data path delay
(logic + routing) below 7ns, so that I don't have to rely on clock skew
to aid the setup time.
I realize that this becomes harder and harder to achieve as the die size
grows because the bigger chip will have more routing delay (For example,
Virtex XCV1000 will likely have much more routing delay than Virtex
XCV150 or Spartan-II XC2S150).
But my understanding, which might be wrong, is that the clock skew number
shown in the report is the maximum possible clock skew.
Then, because clock skew can be less than what the report shows, what is
the minimum clock skew I should expect?




Thanks,



Kevin Brace (don't respond to me directly, respond within the newsgroup)




Bob Perlman wrote:
> 
> Hi -
> 
> I assume you're talking about setup time margins within the chip,
> i.e., both the source and destination flip-flops are on-chip.
> 
> The question boils down to how well the data delay tracks the clock
> skew delay over temperature, voltage, and process.  Xilinx used to
> claim (and maybe still does) that gate delays track to 70%, i.e., if a
> gate is at its maximum delay, no other gate on the die will have a
> delay of less than 70% of its maximum.  Someone at Xilinx can correct
> me, but I believe that the actual tracking of delays on a die is far
> better than that, and that the 70% tracking number was chosen because
> there was absolutely no chance of it ever being violated.
> 
> I tend to obsess over timing, but have to admit that I don't worry too
> much about Xilinx subtracting the clock skew from setup time.  In
> essence, they're guaranteeing that the resulting setup time is
> correct, and that you can design to it.
> 
> If you want something to worry about, wonder whether the speed files
> are correct.  The nice thing is, this concern is vendor-independent
> and will never go away no matter how much you worry, something that
> can be appreciated by those of us seeking constancy in an
> ever-changing, chaotic world.
> 
> Bob Perlman
> Cambrian Design Works
> 
> On Tue, 08 Jan 2002 14:12:06 -0600, Kevin Brace
> <nospamtomekevinbraceusenet@nospamtomehotmail.com> wrote:
> 
> >I made this posting around Christmas, and that seems to be the reason no
> >one responded, so I am reposting the same posting again.
> >
> >
> >
> >
> >_________________________________________________________________________
> >
> >Hi, I would like to know what the readers of this newsgroup think of
> >including clock skew in setup time analysis.
> >I am working on a PCI IP core which with various suggestions from the
> >readers of this newsgroup, I was able to improve setup timings (Tsu)
> >through reduction of logic levels (reduction of levels of LUTs).
> >I am using ISE WebPack 4.1 and targeting Spartan-II 150K system gate
> >part for my PCI IP core.
> >In ISE WebPack when I ran TRCE to generate a timing error report, the
> >timing report for setup time includes clock skew occurring, and this
> >clock skew time subtracts some time off the data path delay (data path
> >delay = gate delay + routing delay) which becomes total or final delay,
> >and the worst time here is shown in the timing summary section.
> >However, if I think carefully about the timing data shown in the report,
> >the temperature assumed here is 85 degrees celsius, and since
> >semiconductor devices have less delays in a lower temperature, at room
> >temperature (20 degrees Celsius) the clock skew will likely be much less
> >than what the report suggests, and even lower at a freezing temperature
> >(0 degrees Celsius, the lowest temperature commercial package version of
> >Spartan-II is guaranteed to function).
> >Yes, I do realize that at a temperature lower than 85 degrees Celsius,
> >the gate delays for LUTs and FFs will also decrease, therefore even if
> >the clock skew decreases that shouldn't cause a major problem, however,
> >no one really knows which one will decrease faster.
> >        Another problem I can think of is that in the case of Xilinx
> >devices, several Xilinx employees have written publicly in this
> >newsgroup (I know those are their own opinions, and not necessarily the
> >company's official position on the issues being raised) that whether or
> >not it is a different speed grade, all the chips come from the same
> >silicon wafer.
> >That will mean that in the case of Virtex, speed grade -4, -5, or -6
> >devices come from the same silicon wafer.
> >I knew nothing about FPGAs two years ago, but from what I hear, Xilinx
> >first came out with Virtex speed grade -4 in 1998, and later got speed
> >grade -5 and -6 out (I don't know the exact release date of those two
> >speed grades. I will be interested to hear when they started to ship).
> >Likely most chips manufactured back in 1998 ran only at speed grade -4,
> >but as Xilinx improved the speed of Virtex through circuit and
> >manufacturing improvements, it was able to pick chips that will run at
> >speed grade -5 or -6.
> >However, there are customers who designed products in the days of Virtex
> >speed grade -4, so Xilinx still has to supply Virtex speed grade -4 to
> >the market.
> >The concern I have here is that even though the chip is marked as a
> >Virtex speed grade -4, isn't it possible that chip could have been
> >marked as a speed grade -6 device because it was manufactured recently?
> >(let's say in 2001)
> >If so, won't the clock skew assumption made during the setup time
> >analysis be off for such Virtex speed grade -4 device, perhaps by 1ns to
> >2ns depending on the device size?
> >I am not criticizing Xilinx for bin splitting devices, but I think it
> >seems risky to use maximum clock skew during setup time analysis.
> >Are there any ways to disable using maximum clock skew from being used
> >in MAP/PAR/TRCE/TimingAn?
> >
> >
> >
> >Thanks,
> >
> >
> >
> >Kevin Brace (don't respond to me directly, respond within the newsgroup)
> 
> --
> Cambrian Design Works
> digital design, signal integrity
> http://www.cambriandesign.com
> e-mail: respond to bob at the domain above.

Article: 38231
Subject: Re: FPGA and CCD : any experience?
From: Jonathan Bromley <Jonathan.Bromley@doulos.com>
Date: Wed, 9 Jan 2002 16:10:29 +0000
Links: << >>  << T >>  << A >>
In article <3C3C5C80.5080702@yahoo.fr>, Gacquer William
<wgacquer@yahoo.fr> writes
>Hello
>       has anybody tried to connect a FPGA to several CCDs ( for imaging 
>purpose, of course ? )

Linear or area (2-D) CCD?  Colour or monochrome?  Interline 
transfer or frame transfer?  Electronic shutter or not?
Single-phase or polyphase clocks?  Single or multiple video
outputs?  Do you need quasi-random readout?

If all the CCDs are the same type, you can generate common timing 
signals for all of them;  but you will still need separate
drivers (level shifters) for each device, because of the very heavy
capacitive load presented by the CCD clock pins.  And you will
need separate signals to control electronic shutter on each device,
unless all CCDs are operating in the same optical environment.

Similarly the timing signals for your A/D converters can perhaps
be common for all CCDs, although this will not work if you are
aiming to use a single A/D converter to support all your 
CCDs.

Linear CCDs are fairly easy to drive.  The clock signals are
not very complicated.  But area CCDs require complicated 
clock sequences (especially colour devices) and it is usually
much easier to use the dedicated drive LSI chips provided by
the CCD vendor.  Sometimes the vendor does not provide full
details of the required clocks, and therefore you are forced
to use their proprietary control chips.
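To make the "common timing signals" idea concrete (all names, the two-phase clocking, and the 800-pixel line length are placeholder assumptions of mine, not from this post; the real phase relationships must come from the CCD data sheet), a line-timing generator in the FPGA might look roughly like this VHDL sketch, with the heavy clock pins driven through external level shifters:

    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity ccd_line_timing is
      port (
        clk       : in  std_logic;   -- master pixel clock
        rst       : in  std_logic;
        h1, h2    : out std_logic;   -- two-phase horizontal clocks (to external drivers)
        adc_clk   : out std_logic;   -- common A/D convert strobe
        line_sync : out std_logic);  -- start-of-line marker
    end ccd_line_timing;

    architecture rtl of ccd_line_timing is
      constant PIXELS_PER_LINE : integer := 800;   -- placeholder value
      signal count : unsigned(9 downto 0) := (others => '0');
    begin
      process (clk)
      begin
        if rising_edge(clk) then
          if rst = '1' or count = PIXELS_PER_LINE - 1 then
            count <= (others => '0');
          else
            count <= count + 1;
          end if;
        end if;
      end process;

      h1        <= count(0);         -- complementary horizontal phases
      h2        <= not count(0);
      adc_clk   <= count(0);         -- one conversion per pixel
      line_sync <= '1' when count = 0 else '0';
    end rtl;

The same generator can fan out to every CCD of the same type; only the driver stages and the per-device shutter controls need to be duplicated.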
-- 
Jonathan Bromley
DOULOS Ltd.
Church Hatch, 22 Market Place, Ringwood, Hampshire BH24 1AW, United Kingdom
Tel: +44 1425 471223                     Email: jonathan.bromley@doulos.com
Fax: +44 1425 471573                             Web: http://www.doulos.com

                   **********************************
                   **  Developing design know-how  **
                   **********************************

This e-mail and any  attachments are  confidential and Doulos Ltd. reserves
all rights of privilege in  respect thereof. It is intended for the  use of
the addressee only. If you are not the intended  recipient please delete it
from  your  system, any  use, disclosure, or copying  of this  document  is
unauthorised. The contents of this message may contain personal views which
are not the views of Doulos Ltd., unless specifically stated.




Article: 38232
Subject: Re: Interpreting Xilinx Timing Analyser report files
From: Brian Philofsky <brian.philofsky@xilinx.com>
Date: Wed, 09 Jan 2002 09:12:23 -0700
Links: << >>  << T >>  << A >>



If you are targeting Virtex-E, there is a pretty cool page on the Xilinx
web site explaining each of these data path types at
http://support.xilinx.com/applications/web_ds/  It is called the
Interactive Data Sheet and can be set up for your exact target device and
I/O standards.  My understanding is a Virtex-II and Spartan-IIE version is
in the works, but I do not know when it will be released.  Otherwise, the
timing section of the data sheet contains a brief description of each path,
but it is not as visual as the interactive datasheet.


--  Brian



Richard Padovan wrote:

> Where can I find a list explaining all the delay types detailed in the
> Timing Analyser report file ? Sure most are self-explanatory but some
> not: eg Tf_fgm.



Article: 38233
Subject: Re: bufg instantiation in ISE 4.1
From: Andy Peters <andy@exponentmedia.nospam.com>
Date: Wed, 09 Jan 2002 16:17:54 GMT
Links: << >>  << T >>  << A >>
k. wrote:

> I am instantiating clock buffers in ISE 4.1 using XST with the
> following
> code -
> 
> 	component bufg
> 	port (
> 		i : in std_logic;
> 		o : out std_logic
> 	);
> 	end component;
> 
> 	ua : bufg
> 	port map (
> 		i => clk,
> 		o => clk_bufg
> 	);
> 
> this will implement ok. But when I try to load the design into
> ModelSim I get
> the following error -
> 
> # WARNING[1]: master.vhd(345): No default binding for component:
> "bufg". (No entity named "bufg" was found)
> 
> What am I missing? I've gone through the Xilinx and ModelSim
> documentation, but I can't find anything relevant. Any help would be
> appreciated.


Why are you instantiating these components?  The tool will infer them 
for you.

--a
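For the ModelSim warning itself, the usual fix (assuming the Xilinx simulation libraries have been compiled into ModelSim, as the poster describes) is to make the UNISIM library visible so the simulator can bind the bufg component; a minimal sketch:

    library unisim;
    use unisim.vcomponents.all;

With those two lines added before the entity, the default binding for the bufg component resolves to the UNISIM entity; alternatively, drop the component declaration and instantiate unisim.vcomponents.bufg directly. (This is general UNISIM usage, not something stated in the thread.)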




Article: 38234
Subject: Re: comp.arch.fpga : Problem with modelsim and ISE4.1
From: Brian Philofsky <brian.philofsky@xilinx.com>
Date: Wed, 09 Jan 2002 09:24:08 -0700
Links: << >>  << T >>  << A >>


Hua Wang wrote:

> Help!
> Every time I launch ModelSim from ISE 4.1 an error occurs. The messages displayed in the ModelSim window are as follows:
> Warning: Could not open log file vsim.wlf, using C;\document settings\...\temp\vismw10 instead.
>
> Problem with simulator....vsim U/I closing(2)
>
> Problem with simluator...vism U/I closing(1)
>
> Then the modelsim was terminated.
> I have followed the instructions on the Xilinx site to compile the Xilinx libraries into ModelSim.
> Could anybody please help me solve this problem?

I have seen this issue before, however I have not seen it close a session before.  Usually it issues the warning that it cannot
open the file (generally because it is locked by another program) and writes the file in your Temp directory rather than the
project directory.  Simulation then continues without closing the simulator.

Be sure you are not opening more than one ModelSim session at a time.  This looks like it could be your issue, since the message you
included shows ModelSim closing twice.  I believe that is generally the cause of this problem.  Try closing all ModelSim sessions
on the machine.  Then go into the project directory and manually delete the vsim.wlf file.  This is a waveform file saved by
ModelSim during the simulation and is generally not of much use to you after you have completed your simulation.  Then try running
the simulation again, but with only one session of ModelSim open at a time.  It is never a good idea to have two simulations
running in the same project directory using the same work library and other shared files (like the vsim.wlf file).  Hopefully that
will get you going.  If not, I suggest contacting Xilinx Support to see if they can figure out the problem.


--  Brian



Article: 38235
Subject: Re: How can I relate Virtex2 pin names and Slice XY loc?
From: "Falk Brunner" <Falk.Brunner@gmx.de>
Date: Wed, 9 Jan 2002 17:46:24 +0100
Links: << >>  << T >>  << A >>
"axilon" <axilon@attbi.com> schrieb im Newsbeitrag
news:3C3BE93D.8050002@attbi.com...
> How can I relate the package pin names and Slice XY locations in Xilinx
> Virtex2 device?  I need it to put LOC constraint to IOB.
> I looked into the pinout_text_files at DataSource CD-ROM but its
> Slice X/Y Location data doesn't match with Physical names appeared at
> FPGA editor screen.  For an example, in 2v40cs144.txt file, X15Y14
> location has 6 pads (PAD21-PAD26) - that doesn't make sense since
> each Slice location can have max 4 pads.  Am I just discovered
> documentation error?

No. Don't mix up slices with IO cells. If you want to move some signals to
specific IO pins, just use a LOC constraint like

NET mysignalname LOC ="P100";

Watch out: the pin names depend on the package; BGAs are different from
TQFPs.

If you want an IO FlipFlop to be placed inside an IOB, just use

NET mysignalname IOB=true;

--
MfG
Falk




Article: 38236
Subject: Re: how do i program a Spartan FPGA
From: "Falk Brunner" <Falk.Brunner@gmx.de>
Date: Wed, 9 Jan 2002 17:52:53 +0100
Links: << >>  << T >>  << A >>
<swan> schrieb im Newsbeitrag news:ee740b2.2@WebX.sUN8CHnE...
> isn't it high time that xilinx introduced a better solution for doing that?

There are more complete solutions from Xilinx, but I haven't used them yet.
The "classic" configuration sequence via serial/parallel mode is explained
in detail in the datasheets.

> i really wonder if xapp 58 has really helped anyone

Yes, we used it in a project. Successfully ;-)
And we are using it right now in another project. Hopefully successfully ;-)

--
MfG
Falk





Article: 38237
Subject: Re: please tell me how to solve xilinx error xml
From: Dennis McCrohan <mccrohan@xilinx.com>
Date: Wed, 09 Jan 2002 09:01:35 -0800
Links: << >>  << T >>  << A >>
dotty1319 wrote:

> ngdbuild -p xc9572-7-pc84 -uc re.ucf -dd ..
> c:\xilinx\active\projects\re\re.edf re.ngd
> Release 3.3.08i - ngdbuild D.27
> Copyright (c) 1995-2000 Xilinx, Inc.  All rights reserved.
>
> Command Line: ngdbuild -p xc9572-7-pc84 -uc re.ucf -dd ..
> c:\xilinx\active\projects\re\re.edf re.ngd
>
> Launcher: Executing edif2ngd "c:\xilinx\active\projects\re\re.edf"
> "C:\Xilinx\active\projects\re\xproj\ver1\re.ngo"
> Release 3.3.08i - edif2ngd D.27
> Copyright (c) 1995-2000 Xilinx, Inc.  All rights reserved.
> Writing the design to
> "C:/Xilinx/active/projects/re/xproj/ver1/re.ngo"...
> Reading NGO file "C:/Xilinx/active/projects/re/xproj/ver1/re.ngo" ...
> Reading component libraries for design expansion...
>
> Annotating constraints to design from file "re.ucf" ...
>
> Checking timing specifications ...
>
> Checking expanded design ...
> FATAL_ERROR:StaticFileParsers:Xml_Node.c:358:1.12.8.2 - Corrupt or
> missing Xml
>    conversion files. Process will terminate.  To resolve this error,
> please
>    consult the Answers Database and other online resources at
>    http://support.xilinx.com

You should ask this question directly of our support hotline, or go to
the URL listed above.

That said, my hunch would be that you have an installation problem. This
is definitely NOT a user error, i.e., there is nothing in your design
causing it. It's an issue with not having all the necessary files for the
application installed in the correct location. In this case, I would
hazard the guess that the files that are missing are those in (or that
should be in) your XILINX/data/xml directory. I just checked my ISE 4.1
install, and I've got 171 files in there with a .cnv file extension.

-Dennis McCrohan, Xilinx CPLD S/W
[Speaking for myself, and not for Xilinx....]




Article: 38238
Subject: Re: function generators of Xilinx XCVxxxxE series
From: John_H <johnhandwork@mail.com>
Date: Wed, 09 Jan 2002 17:12:46 GMT
Links: << >>  << T >>  << A >>
The Virtex series devices have mux selects (F5 and F6 muxes) that allow additional control.  The BX inputs control the F5 muxes in
each slice - selecting between the outputs of the two LUTs in a slice - and the BY input controls the F6 mux in the slice with the output, selecting between
the two F5 mux results.

Two BXs and one BY pad out to 19.
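Spelled out (my arithmetic, just restating the above):

    4 LUTs x 4 inputs        = 16
    + 2 BX (F5 mux selects)  =  2
    + 1 BY (F6 mux select)   =  1
                               --
    inputs per CLB             19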



Matthias Weber wrote:

> hi,
>
> the architectural description of Xilinx's FPGAs explains that 4 logic cells of a CLB can be combined to execute selected functions with up to 19 inputs.
> before that, it is explained that the function generator of each logic cell works with 4 inputs. thus there are all in all 16 inputs at each CLB.
> where are the last 19 - 16 = 3 inputs coming from? are the carry inputs used?
>
> thanks,
>
> matthias


Article: 38239
Subject: Re: How can I relate Virtex2 pin names and Slice XY loc?
From: "Kevin Neilson" <kevin_neilson@removethis-yahoo.com>
Date: Wed, 09 Jan 2002 17:32:14 GMT
Links: << >>  << T >>  << A >>
In my opinion, Xilinx messed up when creating their coordinate systems.
They could have easily created a system in which the BRAM, slice, and IOB
coordinates were all related, but they refused to do so.  The result makes
the creation of placement scripts extremely difficult.  For example, if you
wish to make a script that places registers next to a BRAM, it is very
difficult, because there is no relation between the BRAM at x,y and the
coords of the slices that surround it.  In fact, this is even different for
every part.  The same is true for the IOBs that sit next to slices.  A
better coordinate system would have helped a lot in the development of cores
which can be placed anywhere in any part.  Instead, if you have a core that
requires IOBs close to core slices, you can't include the IOB relative
locations in the core and have to hand-place for each situation.  That's not
how a core is supposed to work.

"axilon" <axilon@attbi.com> wrote in message
news:3C3BE93D.8050002@attbi.com...
> How can I relate the package pin names and Slice XY locations in Xilinx
> Virtex2 device?  I need it to put LOC constraint to IOB.
> I looked into the pinout_text_files at DataSource CD-ROM but its
> Slice X/Y Location data doesn't match with Physical names appeared at
> FPGA editor screen.  For an example, in 2v40cs144.txt file, X15Y14
> location has 6 pads (PAD21-PAD26) - that doesn't make sense since
> each Slice location can have max 4 pads.  Am I just discovered
> documentation error?
> Can any Xilinx folks out there answer my question?
> TIA
> Ax
>



Article: 38240
Subject: Re: please tell me how to solve xilinx error xml
From: Rick Filipkiewicz <rick@algor.co.uk>
Date: Wed, 09 Jan 2002 17:51:59 +0000
Links: << >>  << T >>  << A >>


dotty1319 wrote:

> ngdbuild -p xc9572-7-pc84 -uc re.ucf -dd ..
> c:\xilinx\active\projects\re\re.edf re.ngd
> Release 3.3.08i - ngdbuild D.27
> Copyright (c) 1995-2000 Xilinx, Inc.  All rights reserved.
>
> Command Line: ngdbuild -p xc9572-7-pc84 -uc re.ucf -dd ..
> c:\xilinx\active\projects\re\re.edf re.ngd
>
> Launcher: Executing edif2ngd "c:\xilinx\active\projects\re\re.edf"
> "C:\Xilinx\active\projects\re\xproj\ver1\re.ngo"
> Release 3.3.08i - edif2ngd D.27
> Copyright (c) 1995-2000 Xilinx, Inc.  All rights reserved.
> Writing the design to
> "C:/Xilinx/active/projects/re/xproj/ver1/re.ngo"...
> Reading NGO file "C:/Xilinx/active/projects/re/xproj/ver1/re.ngo" ...
> Reading component libraries for design expansion...
>
> Annotating constraints to design from file "re.ucf" ...
>
> Checking timing specifications ...
>
> Checking expanded design ...
> FATAL_ERROR:StaticFileParsers:Xml_Node.c:358:1.12.8.2 - Corrupt or
> missing Xml
>    conversion files. Process will terminate.  To resolve this error,
> please
>    consult the Answers Database and other online resources at
>    http://support.xilinx.com

I think there was another thread on this topic some months ago. Try
looking at the archives:

http://www.fpga-faq.com

generously hosted by Phillip Freidin.


Article: 38241
Subject: Spartan IIE pinout compatibililty with Virtex E
From: rickman <spamgoeshere4@yahoo.com>
Date: Wed, 09 Jan 2002 12:57:26 -0500
Links: << >>  << T >>  << A >>
To the best of my knowledge, the Virtex chips are nearly pin identical
with the Spartan II family. The only difference is that the temperature sense
diode was to be replaced with a power down function, but that function was
removed from the final design and the pins are now a pair of No Connects.

As I look at the pinouts for the Spartan IIE family, I don't see the
same level of pin compatibility with the Virtex E family. Am I not
reading the docs correctly, or are the pinouts different between these
two families? 

I thought I had a solution to my problems with the power supply surge
current of the Spartan II chips. I only need a (relatively) small number
of 5 volt tolerant IOs in this design (~90). So I could split the FPGAs
into an XC2S using an LDO from the 3.3 supply and a XC2Se running off
the DSP core supply at 1.8 volts. By using different power voltages, the
separate on board regulators could provide the high surge current each
part needed and keep the total parts cost to a reasonable level for the
standard board. But I am also looking for the option of using a higher
density part in the XC2Se socket to be able to provide a lot of FPGA
density for customers who want to roll their own FPGA designs. But it
looks like the XC2Se is not pin compatible with the XCVe parts... Am I
back to the drawing board??? 


-- 

Rick "rickman" Collins

rick.collins@XYarius.com
Ignore the reply address. To email me use the above address with the XY
removed.

Arius - A Signal Processing Solutions Company
Specializing in DSP and FPGA design      URL http://www.arius.com
4 King Ave                               301-682-7772 Voice
Frederick, MD 21701-3110                 301-682-7666 FAX

Article: 38242
Subject: Re: comp.arch.fpga : Problem with modelsim and ISE4.1
From: Rick Filipkiewicz <rick@algor.co.uk>
Date: Wed, 09 Jan 2002 17:59:50 +0000
Links: << >>  << T >>  << A >>


Brian Philofsky wrote:

> Hua Wang wrote:
>
> > Help!
> > Every time I launched the modelsim in ISE4.1 there was error occured. Messages displayed in the modelsim window are as follows;
> > Warning: Could not open log file vsim.wlf, using C;\document settings\...\temp\vismw10 instead.
> >
> > Problem with simulator....vsim U/I closing(2)
> >
> > Problem with simluator...vism U/I closing(1)
> >
> > Then the modelsim was terminated.
> > I have followed the instructions on Xilinx site to compile the libs of Xilinx into Modlesim.
> > Could anybody please help me solve this problem.
>
> I have seen this issue before however I have not seen it close a session before.  Usually it issues the warning that it can not
> open the file (generally becuase it is locked by another program) and writes the file in your Temp directory rather than the
> project directory.  Simulation then continues without closing the simulator.
>
> Be sure you are not opening more than one ModelSim session at a time.  This looks like it could be you issue since the message you
> included shows ModelSim closing twice.  I believe that is generally the cause of this problem.  Try closing all Modelsim sessions
> on the machine.  Then go into the project directory and manually delete the vsim.wlf file.  This file is a waveform file saved by
> ModelSim during the simulation.  Generally is not of much use to you after you have completed your simulation.  Then try running
> the simulation again however only have one session of ModelSim open at a time.  It is never a good idea to have two simulations
> occuring in the same project directory using the same work library as well as other files (like the vsim.wlf file).  Hopefully that
> will get you going.  If not, I suggest contacting Xilinx Support and see if they can figure out the problem.
>
> --  Brian

There's the other option of specifying a .wlf file on the vsim command line. This is useful since it avoids c:/temp getting cluttered up
with 100s of waveform files, one for each restart/run.
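For reference (the file and testbench names here are placeholders), the switch is -wlf:

    vsim -wlf my_run.wlf work.my_testbench

Each run then writes its waveform data to the named file instead of the default vsim.wlf.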



Article: 38243
Subject: Re: Spartan IIE pinout compatibililty with Virtex E
From: Rick Filipkiewicz <rick@algor.co.uk>
Date: Wed, 09 Jan 2002 19:54:32 +0000
Links: << >>  << T >>  << A >>


rickman wrote:

> To the best of my knowledge, the Virtex chips are nearly pin identical
> with the Spartan II family. The only difference is the temperature sense
> diode was intended to be replaced with a power down function, but was
> removed from the final design and is now a pair of No Connects.
>
> As I look at the pinouts for the Spartan IIE family, I don't see the
> same level of pin compatiblity with the Virtex E family. Am I not
> reading the docs correctly, or are the pinouts different between these
> two families?
>
> I thought I had a solution to my problems with the power supply surge
> current of the Spartan II chips. I only need a (relatively) small number
> of 5 volt tolerant IOs in this design (~90). So I could split the FPGAs
> into an XC2S using an LDO from the 3.3 supply and a XC2Se running off
> the DSP core supply at 1.8 volts. By using different power voltages, the
> separate on board regulators could provide the high surge current each
> part needed and keep the total parts cost to a reasonable level for the
> standard board. But I am also looking for the option of using a higher
> density part in the XC2Se socket to be able to provide a lot of FPGA
> density for customers who want to roll their own FPGA designs. But it
> looks like the XC2Se is not pin compatible with the XCVe parts... Am I
> back to the drawing board???
>

I was just about to investigate the same idea. Looks like we've been bitten
by the marketing dept.



Article: 38244
Subject: Re: Spartan IIE pinout compatibililty with Virtex E
From: rickman <spamgoeshere4@yahoo.com>
Date: Wed, 09 Jan 2002 15:11:14 -0500
Links: << >>  << T >>  << A >>
Rick Filipkiewicz wrote:
> 
> rickman wrote:
> 
> > To the best of my knowledge, the Virtex chips are nearly pin identical
> > with the Spartan II family. The only difference is the temperature sense
> > diode was intended to be replaced with a power down function, but was
> > removed from the final design and is now a pair of No Connects.
> >
> > As I look at the pinouts for the Spartan IIE family, I don't see the
> > same level of pin compatiblity with the Virtex E family. Am I not
> > reading the docs correctly, or are the pinouts different between these
> > two families?
> >
> > I thought I had a solution to my problems with the power supply surge
> > current of the Spartan II chips. I only need a (relatively) small number
> > of 5 volt tolerant IOs in this design (~90). So I could split the FPGAs
> > into an XC2S using an LDO from the 3.3 supply and a XC2Se running off
> > the DSP core supply at 1.8 volts. By using different power voltages, the
> > separate on board regulators could provide the high surge current each
> > part needed and keep the total parts cost to a reasonable level for the
> > standard board. But I am also looking for the option of using a higher
> > density part in the XC2Se socket to be able to provide a lot of FPGA
> > density for customers who want to roll their own FPGA designs. But it
> > looks like the XC2Se is not pin compatible with the XCVe parts... Am I
> > back to the drawing board???
> >
> 
> I was just about to investigate the same idea. Looks like we've been bitten
> by the marketing dept.

Actually, I would bet this is the engineering department. I did not do a
thorough comparison to see exactly what has changed (very, very tedious)
but I did notice that some ground pins have moved to power pins. It
looks like they use three power pins for each bank of IO pins on the
XC2Se rather than the two power pins on the XCVe. I would guess that
they had some trouble with power decoupling or other dv/dt effects from
the power rather than the typical ground bounce effects that they had
been so worried about. 

So rather than stick with the devil that you know, they decided to
add a new one :)

It would have been nice to be able to offer one board with the dual
build approach. 


-- 

Rick "rickman" Collins

rick.collins@XYarius.com
Ignore the reply address. To email me use the above address with the XY
removed.

Arius - A Signal Processing Solutions Company
Specializing in DSP and FPGA design      URL http://www.arius.com
4 King Ave                               301-682-7772 Voice
Frederick, MD 21701-3110                 301-682-7666 FAX

Article: 38245
Subject: Re: FPGA and CCD : any experience?
From: Gacquer William <wgacquer@yahoo.fr>
Date: Wed, 09 Jan 2002 22:42:44 +0100
Links: << >>  << T >>  << A >>


Jonathan Bromley wrote:

> In article <3C3C5C80.5080702@yahoo.fr>, Gacquer William
> <wgacquer@yahoo.fr> writes
> 
>>Hello
>>      has anybody tried to connect a FPGA to several CCDs ( for imaging 
>>purpose, of course ? )
>>
> 
> Linear or area (2-D) CCD?  Colour or monochrome?  Interline 
> transfer or frame transfer?  Electronic shutter or not?
> Single-phase or polyphase clocks?  Single or multiple video
> outputs?  Do you need quasi-random readout?
> 
> If all the CCDs are the same type, you can generate common timing 
> signals for all of them;  but you will still need separate
> drivers (level shifters) for each device, because of the very heavy
> capacitive load presented by the CCD clock pins.  And you will
> need separate signals to control electronic shutter on each device,
> unless all CCDs are operating in the same optical environment.
> 
> Similarly the timing signals for your A/D converters can perhaps
> be common for all CCDs, although this will not work if you are
> aiming to use a single A/D converter to support all your 
> CCDs.
> 
> Linear CCDs are fairly easy to drive.  The clock signals are
> not very complicated.  But area CCDs require complicated 
> clock sequences (especially colour devices) and it is usually
> much easier to use the dedicated drive LSI chips provided by
> the CCD vendor.  Sometimes the vendor does not provide full
> details of the required clocks, and therefore you are forced
> to use their proprietary control chips.
> 


Thanks for your answer.

It's for video purpose.
The CCDs are :
- area
- colour
- interline transfer
- electronic shutter

I do not fully understand your questions about the clocks, the number of 
video outputs, and the quasi-random readout (what is it?).
Is it feasible to synchronize CCD readouts? (what is the uncertainty?)

Which CCD vendor would you recommend for 60 FPS digital video at a 
resolution of 800x600 (for instance) in outdoor conditions? I am not 
only looking for the CCD itself but also for the companion chips you 
mentioned.

Best regards

William Gacquer
		




Article: 38246
Subject: Where to buy Altera APEX20K at a reasonable price?
From: mthan@divio.com (Han, MT)
Date: 9 Jan 2002 14:32:38 -0800
Links: << >>  << T >>  << A >>
We need to buy 8 Altera APEX20K chips at a reasonable
price. Please email me (mthan@divio.com) information.
Thank you for your help.

Article: 38247
Subject: Error -10010 during Digital Buffer Control
From: arast@inficom.com (Alex Rast)
Date: Wed, 09 Jan 2002 23:08:53 GMT
Links: << >>  << T >>  << A >>
Following up on my difficulties in doing dynamic digital writes, I created a 
VI that seemed to be along the right lines. I do a DIO Config and a Digital 
Clock config outside my data-generation While loop. Inside the loop, I have a 
Digital Single Write followed by a Digital Buffer Control with the start 
argument to start the transfer.

But no matter what value I pass to the Digital Buffer Control for the number 
of updates, it errors out with a -10010 error. I tried putting the Digital 
Buffer Control before the Single Write, and after, neither of which made any 
difference. I tried setting it to the value I set in the DIO Config. I tried 
0, 1, 2, and 4. In desperation I even tried a looping iteration through 
incremental values of the number of updates, hoping to hit a valid value by 
brute force, none of which worked.

I also thought about the possibility that maybe Digital Single Write doesn't 
require Digital Buffer Control. But when I did that, the DIO-32HS would never 
write the data bits at all. Instead, writes would stack up for each successive 
iteration, until the loop died at iteration 16 (presumably because the 
DIO-32HS' internal FIFO overflowed).

So what do I need to do to get Digital Single Write to actually write the 
words?

Alex Rast
arast@inficom.com
arast@qwest.net

Article: 38248
Subject: Re: How can I relate Virtex2 pin names and Slice XY loc?
From: Bret Wade <bret.wade@xilinx.com>
Date: Wed, 09 Jan 2002 18:13:23 -0700
Links: << >>  << T >>  << A >>
Hi Kevin,

The 4.1i release (SP2 needed) contains a new feature that you may find useful.
It supports a new grid system called the RPM Grid. It's a combined grid system
that allows you to create heterogeneous relocatable RPMs. With the standard
grid, if you created an RPM with BRAMs , Slices and IOBs, the relative locations
between the different component types would shift as the macro was moved. This
doesn't occur with the RPM Grid.

To use the new grid system, create the RPM  as usual, but using the alternative
coordinate system. The RPM needs to also contain the attribute "RPM_GRID=GRID"
to identify the coordinate system. This attribute can be placed on any symbol in
the macro. The coordinate system can be viewed in FPGA Editor. If you select a
Site in FED, note that an RPM_GRID coordinate is printed in the history window.

Sorry, there's no documentation yet. I'm working on an appnote.

Regards,
Bret Wade
Xilinx Product Applications
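The post doesn't spell out the syntax; as a rough sketch (the instance names and coordinates are placeholders, and whether the attributes are attached in the netlist, the HDL, or the UCF may differ from this), the idea is to tag one symbol of the RPM with RPM_GRID and give the members RLOCs in the combined grid:

    INST "my_macro/u_reg0"  RPM_GRID = GRID ;
    INST "my_macro/u_reg0"  RLOC = X0Y0 ;
    INST "my_macro/u_bram0" RLOC = X4Y1 ;

The RLOC offsets between BRAM, slice, and IOB members then stay valid as the macro is relocated, which is the point of the new grid.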

Kevin Neilson wrote:

> In my opinion, Xilinx messed up when creating their coordinate systems.
> They could have easily created a system in which the BRAM, slice, and IOB
> coordinates were all related, but they refused to do so.  The result makes
> the creation of placement scripts extremely difficult.  For example, if you
> wish to make a script that places registers next to a BRAM, it is very
> difficult, because there is no relation between the BRAM at x,y and the
> coords of the slice that surround it.  In fact, this is even different for
> every part.  The same is true for the IOBs that sit next to slices.  A
> better coordinate system would have helped a lot in the development of cores
> which can be placed anywhere in any part.  Instead, if you have a core that
> requires IOBs close to core slices, you can't include the IOB relative
> locations in the core and have to hand-place for each situation.  That's not
> how a core is supposed to work.
>
> "axilon" <axilon@attbi.com> wrote in message
> news:3C3BE93D.8050002@attbi.com...
> > How can I relate the package pin names and Slice XY locations in Xilinx
> > Virtex2 device?  I need it to put LOC constraint to IOB.
> > I looked into the pinout_text_files at DataSource CD-ROM but its
> > Slice X/Y Location data doesn't match with Physical names appeared at
> > FPGA editor screen.  For an example, in 2v40cs144.txt file, X15Y14
> > location has 6 pads (PAD21-PAD26) - that doesn't make sense since
> > each Slice location can have max 4 pads.  Am I just discovered
> > documentation error?
> > Can any Xilinx folks out there answer my question?
> > TIA
> > Ax
> >


Article: 38249
Subject: Re: Where can I download Maxlock's PCI core? (It's a free core, but I can't download it from their WWW site.) Can anyone send it to me by email?
From: Philip Cummins <philip@no-spam.cs.uwa.edu.au>
Date: Thu, 10 Jan 2002 09:26:24 +0800
Links: << >>  << T >>  << A >>
[[ This message was both posted and mailed: see
   the "To," "Cc," and "Newsgroups" headers for details. ]]

Hello,

Maxlock's PCI cores are not free; however, by filling in the request
forms you can obtain an evaluation copy for assessment. (If you
wanted to use it in any projects, you'd have to license it from them.)

PC


