Hi

Thanks for your answer. Well, I am a real newbie, so I have to ask again what you mean exactly. So I need a state machine to manage this simple operation? First I have to set Data_In_Valid <= '1', then I am able to perform this operation:

Data_Out <= ("0000000000000000") & (Data_In xor ("0000000000000001"));

Finally I have to set Data_Out_Valid <= '1', and then I am able to read out the data. Is this correct? Is it perhaps possible that you provide me a simple example for this task? I would be very thankful.

Cheers
R

"Goran Bilski" <goran@xilinx.com> wrote in message news:cjjt8j$ii81@cliff.xsj.xilinx.com...
> You have to handle the other signals in the protocol like the
> data_in_valid,...
> You also have to set the Data_Out_Valid when you actually have valid data.
>
> Göran
>
> Roger Planger wrote:
>
>> Hello
>>
>> I have manually added the peripheral described in the following document
>> to my system, and there really were output results, so it worked quite well.
>>
>> http://direct.xilinx.com/bvdocs/appnotes/xapp529.pdf
>>
>> The next step was that I altered the entity iDCT so that it only
>> performs a simple xor operation. This was the only change!
>> But unfortunately the output value is always 0, so can somebody perhaps
>> tell me what went wrong in this example?
>> The IP should be added correctly to the MicroBlaze processor; is it
>> perhaps possible that I have to set the Data_Out_Valid signal <= '1' before
>> I am able to read from the output port?
>>
>> Here is my entity declaration and the software part for writing and
>> reading!
>>
>> entity iDCT is
>>   port (
>>     Clk            : in  std_logic;
>>     Reset          : in  std_logic;
>>     Data_In        : in  std_logic_vector(15 downto 0);
>>     Data_In_Valid  : in  std_logic;
>>     Read_Data_In   : out std_logic;
>>     Data_Out_Full  : in  std_logic;
>>     Data_Out       : out std_logic_vector(31 downto 0);
>>     Data_Out_Valid : out std_logic);
>> end entity iDCT;
>>
>> architecture IMP of iDCT is
>> begin
>>   Data_Out <= ("0000000000000000") & (Data_In xor ("0000000000000001"));
>> end architecture IMP;
>>
>>   k=2222;
>>   microblaze_nbwrite_datafsl(k,0);
>>   k=0;
>>   microblaze_nbread_datafsl(k,0);
>>   xil_printf("X...OR :%d -- ",k);
>>
>> Thanks for your help!
>> R

Article: 73951
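A minimal registered sketch of the iDCT XOR peripheral discussed above, adding the handshake logic Goran describes. This is not verified code: the exact cycle timing of Read_Data_In and Data_Out_Valid relative to the FSL FIFOs is an assumption and should be checked against xapp529.

```vhdl
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;

entity iDCT is
  port (
    Clk            : in  std_logic;
    Reset          : in  std_logic;
    Data_In        : in  std_logic_vector(15 downto 0);
    Data_In_Valid  : in  std_logic;
    Read_Data_In   : out std_logic;
    Data_Out_Full  : in  std_logic;
    Data_Out       : out std_logic_vector(31 downto 0);
    Data_Out_Valid : out std_logic);
end entity iDCT;

architecture IMP of iDCT is
begin
  -- Consume one input word per handshake (Read_Data_In), and present the
  -- result for exactly one cycle with Data_Out_Valid, only when the
  -- output FIFO has room (Data_Out_Full = '0').
  process (Clk)
  begin
    if rising_edge(Clk) then
      Read_Data_In   <= '0';
      Data_Out_Valid <= '0';
      if Reset = '1' then
        Data_Out <= (others => '0');
      elsif Data_In_Valid = '1' and Data_Out_Full = '0' then
        Read_Data_In   <= '1';  -- acknowledge the input word
        Data_Out       <= X"0000" & (Data_In xor X"0001");
        Data_Out_Valid <= '1';  -- one-cycle write strobe to the output FIFO
      end if;
    end if;
  end process;
end architecture IMP;
```

The key difference from the pure combinational version is that Data_Out_Valid is pulsed only when an input word is actually consumed, so the MicroBlaze read side sees exactly one result per write.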
After posting this it occurred to me that this isn't comp.arch.xilinx, and I should point out we're using a Virtex-II Pro (xc2vp20) on an AFX FF1152 board. We don't see this problem under simulation.

-- 
Michael Dales
University of Cambridge Computer Laboratory
http://www.cl.cam.ac.uk/~mwd24/

Article: 73952
You should download EDK 6.3i, where you can do this automatically in the tools. The tools will insert a dummy function where you can change one line for the type of function you want to do.

Göran

Roger Planger wrote:

Article: 73953
Sounds good, but the download is available from October 4, so can you, Goran, or somebody else please quickly tell me the steps I have to do to solve this problem? Would be really great.

Cheers

"Goran Bilski" <goran@xilinx.com> wrote in message news:cjjv4r$i7n1@cliff.xsj.xilinx.com...
> You should download EDK 6.3i where you can do this automatically in the
> tools. The tools will insert a dummy function where you can change one
> line for this type of function you want to do.
>
> Göran
>
> Roger Planger wrote:
--snip--

Article: 73954
rickman <spamgoeshere4@yahoo.com> wrote in message news:<41583A6C.827D2CFF@yahoo.com>...
> Isn't this a rather clumsy piece of code? Isn't there a way to use a
> few simple lines to infer a block ram (that is not written) and then
> init the contents separately? It is not often that I want to hard code
> my ROM contents.

I was adhering to the customer's code sample when formulating the original reply. To achieve what you want, you can do the following: the easiest way to make a ROM that can be initialized separately is to instantiate an Altera megafunction. You can use the MegaWizard Plug-In Manager (Tools menu) to configure the block to look at a specific initialization file, and then you can change the contents of the file later on. The LPM_ROM is part of the LPM standard and should be supported by most FPGA CAD tools.

- Subroto Datta
Altera Corp.

Article: 73955
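Subroto's LPM_ROM suggestion above might look like the following sketch. The entity, widths, and the .mif file name are hypothetical; check the Altera LPM documentation for the exact generic and port list.

```vhdl
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
library lpm;
use lpm.lpm_components.all;

entity rom16x8 is
  port (
    clk     : in  std_logic;
    address : in  std_logic_vector(3 downto 0);
    q       : out std_logic_vector(7 downto 0));
end entity rom16x8;

architecture rtl of rom16x8 is
begin
  -- The ROM contents come from the file named in LPM_FILE; editing that
  -- .mif file (and recompiling) changes the ROM without touching the HDL.
  u_rom : lpm_rom
    generic map (
      LPM_WIDTH           => 8,
      LPM_WIDTHAD         => 4,
      LPM_ADDRESS_CONTROL => "REGISTERED",    -- address latched on inclock
      LPM_OUTDATA         => "UNREGISTERED",  -- combinational data out
      LPM_FILE            => "rom_init.mif")  -- hypothetical init file name
    port map (
      address => address,
      inclock => clk,
      q       => q);
end architecture rtl;
```

The point of this arrangement is exactly what rickman asked for: the contents live in a separate initialization file rather than being hard-coded in the HDL.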
In article <29f7d.3570$JG2.795@newssvr14.news.prodigy.com>, superfpga <superfpga@pacbell.net> wrote:
>Paul-
>you make some good points in your discussions especially about the $1 extra
>cost compared to a $100 FPGA as well as highlighting the significance of
>saving that same $1 if it was in a Cyclone low cost device.
>
>However, how would you change your mind if some of the newer developing NV
>memory technologies were similar in size to a flash cell/dram cell (<< area of
>SRAM), also used a standard leading edge CMOS process (unlike Flash or DRAM),
>thus much more cost effective bulk memory without adding any process premium
>to the rest of the chip? Though maybe not the holy grail, don't you think
>it provides a good amount of value as cited by some of Hal's, Jim's and
>Nicholas' posts? There are probably even other ideas out there for value.

a) I doubt it would come without a process penalty at the 130 nm or 90 nm node (or the equivalent at a given point in time). If it could, that might be another story.

b) If it could, it would need to be many-times rewriteable to be acceptable to the FPGA companies. One of the real advantages of FPGAs over other technologies (antifuse, etc.) is the unlimited rewrite capability. Thus a write-once or limited-write memory would be far less useful.

c) My point is that the security aspect of embedded NV RAM is significantly less when compared to using the existing SRAM-based bitfile encryption to protect a large external nonvolatile store.

So I'm a naysayer on NV RAM.

-- 
Nicholas C. Weaver. to reply email to "nweaver" at the domain icsi.berkeley.edu

Article: 73956
"Antti Lukats" <antti@case2000.com> writes:
> are you sure that the MGT should be capable of correctly receiving the
> first packet after regaining lock (after loss of signal) - as soon as
> there are 75 non-transition data bits the lock is lost, and locking
> again takes time, so you need to supply some valid stream for lock-in
> before the first packet

I assume you're asking if we give it long enough to lock, and I think the answer here is yes. If I leave it running for several seconds nothing much changes, and the comma symbol is being constantly sent. There are plenty of transitions, as I can see from attaching a scope to the output.

I'm not sure it's particularly a clock sync problem, more an alignment problem, as I constantly see the 16-bit symbol we send; just sometimes it'll be out of alignment by 8 bits. We send:

-- comma sequence: send K28.5 (BC) and D16.2 (50)
constant C_comma_data    : std_logic_vector (15 downto 0) := X"BC50";
constant C_comma_charisk : std_logic_vector (1 downto 0)  := B"10";

And after a reset, I see either a constant stream of 0xBC50 on rxdata or 0x50BC. In the case where I see 0x50BC it remains like that indefinitely until I send a packet. The packet is ignored by our design (as the start-of-packet symbol will be unaligned, and thus not read properly), but after the packet has gone through, the channel is aligned properly, rxdata reads 0xBC50 between packets, and all packets get across fine.

-- 
Michael Dales
University of Cambridge Computer Laboratory
http://www.cl.cam.ac.uk/~mwd24/

Article: 73957
Hi,

I expected that you can initialize the burst length after SDRAM startup. I just wonder what happens in case you have a Nios II cache miss. However, I will have to spend some time simulating the whole bunch...

Configuring the SDRAM burst length is a normal step in using SDRAMs, isn't it?

Best Regards
Markus

zohargolan@hotmail.com (zg) wrote in message news:<e24ecb44.0409301615.7b8bccaf@posting.google.com>...
> Hi Ken,
>
> Thank you for your response.
> What frequency are you running the NIOS at? I tried simulating at 72MHz
> and I got only 2-word bursts.
> Are you using NIOS II?
>
> Regards,
> Zohar
>
> "Kenneth Land" <kland_not_this@neuralog_not_this.com> wrote in message news:<10lg0aca67jseaa@news.supernews.com>...
> > "zg" <zohargolan@hotmail.com> wrote in message
> > news:e24ecb44.0409241455.340ba130@posting.google.com...
> > > Hi All,
> > >
> > > I am trying to use the SDR SDRAM controller that comes with the
> > > NIOS II development package. In the simulation it looks like this core
> > > supports only 2-word bursts. I couldn't find anything in the
> > > documentation.
> > > Am I correct?
> > > If this core supports bigger bursts than 2 words, any ideas what I am
> > > doing wrong?
> > >
> > > Thank you all
> > > Zohar
> >
> > Zohar,
> >
> > I'm working on a design that uses one 16MB sdram chip for most of its
> > instruction and data memory. I rely heavily on dma and other burst reads
> > and writes (ie cache) to get the performance I need.
> >
> > The sdram controller you're talking about is good enough to burst 480 32-bit
> > words in under 485 cpu clocks. It's a beautiful thing to watch in SignalTap
> > as I get this performance. (Haven't explored the length limits above
> > 480 - my external fifo's AlmostFull level)
> >
> > I'm hammering that sdram (through the Altera sdram controller) nine ways to
> > Sunday and it performs flawlessly.
> >
> > I have some serious issues with the whole Nios I/II chain, but the sdram
> > controller has been a champ.
> >
> > Ken

Article: 73958
General Schvantzkoph wrote:
(snip, someone wrote)

>>>Cost of a mask set for different technologies:
>>>250 nm: $ 100 k
>>>180 nm: $ 300 k
>>>130 nm: $ 800 k
>>> 90 nm: $1200 k
>>> 65 nm: $2000 k

(snip)

> Those figures are for pure ASICs, what are the costs for structured ASICs?

I don't know about structured ASICs, but before there were FPGAs there were ordinary gate arrays. As a cheaper way to build an ASIC, companies would make arrays of transistors such that only the metallization layers needed to be added to build an ASIC, maybe one or two metal layers. Gate arrays don't allow the variability of transistor size that other technologies allow, but the cost savings makes a big difference. I believe that early SPARC processors, among others, were built using gate array technology.

-- glen

Article: 73959
Ben_Koh wrote:
> The following code I have written kept giving me an error of
> "Design contain an unbreakable combination cycle"; I have tried using
> delays but it doesn't help.

You need a register to break a combinatorial cycle. Consider the Verilog statement

  assign n = n + 1;

Unlike in C (Fortran, PL/I, Algol, BASIC, Pascal, ...), where the new value replaces the old value only when the statement is executed, this is a continuous assignment statement. As soon as the new value appears in n, it is immediately used again in the expression n+1. As hardware, it connects the output of a combinatorial adder to its input. It might be that it cycles through the allowed values as fast as the adder allows.

The Verilog code

  always @(posedge clk) n = n + 1;

will assign to n, on the rising edge of the clock, the value n+1 had just before the clock edge (at least Tsu before). As hardware, it is an edge-triggered register on the output of an adder, with the register output connected to the adder input.

-- glen

Article: 73960
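Glen's point about breaking the cycle with a register can be sketched the same way in VHDL (an 8-bit counter is assumed here purely for illustration):

```vhdl
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
use IEEE.NUMERIC_STD.ALL;

entity counter is
  port (
    clk : in  std_logic;
    q   : out unsigned(7 downto 0));
end entity counter;

architecture rtl of counter is
  signal n : unsigned(7 downto 0) := (others => '0');
begin
  -- Combinational loop, rejected by synthesis:
  --   n <= n + 1;  -- adder output feeds its own input with no register
  --
  -- Registered version: n only takes the new value at the clock edge,
  -- so the cycle is broken by a flip-flop.
  process (clk)
  begin
    if rising_edge(clk) then
      n <= n + 1;
    end if;
  end process;

  q <= n;
end architecture rtl;
```

The commented-out concurrent assignment is exactly the "unbreakable combinational cycle" the tool complains about; moving the assignment inside a clocked process is the fix.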
"David" <david.lamb@gmail.com> wrote in message news:4b5ddf5.0410010758.cf16a3@posting.google.com...
> Hi,
> I'm getting an HDL ADVISOR message when I synthesize this code.
> Basically, it is a big shift register that shifts in data on each
> rising edge of mclk if srclkRise is high, and outputs its data on the
> rising edge of mclk if lrclkRise is high. I don't quite understand the
> advice here. Can anybody help?
> Thanks,
> David
>
> --ADVICE
> INFO:Xst:741 - HDL ADVISOR - A 9-bit shift register was found for
> signal <datain<8>> and currently occupies 9 logic cells (5 slices).
> Removing the set/reset logic would take advantage of SRL16 (and
> derived) primitives and reduce this to 1 logic cells (1 slices).
> Evaluate if the set/reset can be removed for this simple shift
> register. The majority of simple pipeline structures do not need to be
> set/reset operationally.
>
> --CODE
> always @ (posedge mclk or negedge resetb)
> begin
>   if (~resetb) begin
>     datain <= 0;
>     leftdata <= 0;
>     rightdata <= 0;
>   end
>   else begin
>     if (sclkRise == 1) begin
>       datain[63:1] <= datain[62:0];
>       datain[0] <= sdata;
>     end
>     if (lrclkRise == 1) begin
>       leftdata <= datain[63:40];
>       rightdata <= datain[31:8];
>     end
>   end
> end

I think it's the resetting code at the beginning that causes this. Resetting all the registers is set/reset functionality. I don't know Verilog that well, but this is an asynchronous reset (resetb is in the sensitivity list), which forces each bit into an ordinary flip-flop with its dedicated preset/reset input. The SRL16 primitive has no set/reset input at all, so as long as the shift register must be clearable, it cannot map onto the SRL16.

Jeroen

Article: 73961
Austin Lesea wrote:

> We can get an arbitrarily high coverage by just increasing our
> patterns (99.9%+) for 0 added silicon cost. ASICs can not do that.

Do FPGAs typically have significant 'hidden' test structures on board? Did they in the past?

I'm pretty much Xilinx in my DNA, but as I recall Altera used to advertise a spare-column arrangement to improve yield, as in DRAM. I guess this would be more or less impossible with ASICs, and maybe impossible with modern FPGAs.

Article: 73962
The SRL16 does not have a reset pin on it, i.e., you can't clear its contents in one clock cycle. Your code has a reset in it that clears the whole shift register, which means it cannot be implemented using an SRL16 primitive (which costs only one LUT+FF instead of 9). If you can eliminate the reset on this shift register, then you get a more compact implementation. Your code would look like this (note that with the reset branch gone, resetb comes out of the sensitivity list too):

always @ (posedge mclk)
begin
  if (sclkRise == 1) begin
    datain[63:1] <= datain[62:0];
    datain[0]    <= sdata;
  end
  if (lrclkRise == 1) begin
    leftdata  <= datain[63:40];
    rightdata <= datain[31:8];
  end
end

David wrote:
> Hi,
> I'm getting an HDL ADVISOR message when I synthesize this code.
> Basically, it is a big shift register that shifts in data on each
> rising edge of mclk if srclkRise is high, and outputs its data on the
> rising edge of mclk if lrclkRise is high. I don't quite understand the
> advice here. Can anybody help?
> Thanks,
> David
>
> --ADVICE
> INFO:Xst:741 - HDL ADVISOR - A 9-bit shift register was found for
> signal <datain<8>> and currently occupies 9 logic cells (5 slices).
> Removing the set/reset logic would take advantage of SRL16 (and
> derived) primitives and reduce this to 1 logic cells (1 slices).
> Evaluate if the set/reset can be removed for this simple shift
> register. The majority of simple pipeline structures do not need to be
> set/reset operationally.
>
> --CODE
> always @ (posedge mclk or negedge resetb)
> begin
>   if (~resetb) begin
>     datain <= 0;
>     leftdata <= 0;
>     rightdata <= 0;
>   end
>   else begin
>     if (sclkRise == 1) begin
>       datain[63:1] <= datain[62:0];
>       datain[0] <= sdata;
>     end
>     if (lrclkRise == 1) begin
>       leftdata <= datain[63:40];
>       rightdata <= datain[31:8];
>     end
>   end
> end

-- 
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930 Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com

"They that give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety." -Benjamin Franklin, 1759

Article: 73963
Nicholas -

Regarding point d - I agree, depending on your cost threshold for security and purpose. S/W code flexibility? Cost of battery + NV chip = 20% premium on a $10 12K-Lcell low-cost device? I think the decision will vary situation by situation.

For points a and b - regarding rewriteable flexibility - despite the value, I have been told by many FPGA vendor employees that something like 50% of FPGA users do not provide means for in-field configuration data upgrades. For these customers I thought that on-chip OTP NV data would provide extra storage utility since they do not have the external NV chip. Thoughts?

"Nicholas Weaver" <nweaver@soda.csua.berkeley.edu> wrote in message news:cjk0pe$fv6$1@agate.berkeley.edu...
--snip--

Article: 73964
Generally speaking (I'll talk about the exceptions in a second), the generic gets the init value passed to the simulator but not to the hardware, and the INIT attribute passes it to the hardware. So, assuming that is the case, you need the INIT attribute on the primitive in order to pass the initialization to the EDIF netlist (and thus on to the bit file). The attributes are ignored by simulation, so you need to set the primitive up with a generic in order to initialize the simulation so that it matches the hardware.

About a year ago, some synthesis tools started parsing certain generics like the INIT generic to automatically pass the generic value to the hardware (essentially by automatically adding an INIT attribute), so if you have one of those synthesis tools, you technically do not need to include an INIT attribute. However, if you want your code portable between tools that do this and tools that don't, then you need to make it so the tool ignores the generic when it synthesizes the design. The translate_on/off pragmas are a switch that causes the synthesis to skip over those lines (if you don't use them, you end up with two INITs in the EDIF, which causes problems in translate in the Xilinx tool chain).

The code you have here has the generic, but it is ignored in synthesis, so it never gets passed to the bitstream regardless of whether the tool can do it or not. You are missing the matching INIT attribute, which I see you have in one of the follow-up posts.

Another thing: you should consider using ieee.numeric_std instead of std_logic_arith and std_logic_unsigned. It is an IEEE standard and contains definitions for both signed and unsigned types. STD_LOGIC_UNSIGNED, STD_LOGIC_SIGNED, and STD_LOGIC_ARITH are not standard, meaning you can get different behavior out of different vendors' libraries. Also, the signed and unsigned libraries have conflicting definitions, which presents problems if you have a design that uses both.

Brad Smallridge wrote:
> OK. What is wrong with this code? I am expecting to initialize the SRL16 with
> some sort of pattern, then loop it around continuously in a 10-bit pattern,
> and put it to a pad where I can see it with a scope. I get one little blip
> but not much.
>
> library IEEE;
> use IEEE.STD_LOGIC_1164.ALL;
> use IEEE.STD_LOGIC_ARITH.ALL;
> use IEEE.STD_LOGIC_UNSIGNED.ALL;
>
> library UNISIM;
> use UNISIM.VComponents.all;
>
> entity srltest is
>   port (
>     clk : in  std_ulogic;
>     q   : out std_ulogic);
> end srltest;
>
> architecture Behavioral of srltest is
>
> component SRL16
>   -- synthesis translate_off
>   generic (
>     INIT : bit_vector := X"1001");
>   -- synthesis translate_on
>   port (Q   : out STD_ULOGIC;
>         A0  : in  STD_ULOGIC;
>         A1  : in  STD_ULOGIC;
>         A2  : in  STD_ULOGIC;
>         A3  : in  STD_ULOGIC;
>         CLK : in  STD_ULOGIC;
>         D   : in  STD_ULOGIC);
> end component;
>
> -- Component Attribute specification for SRL16
> -- should be placed after architecture declaration but
> -- before the begin keyword
> -- Enter attributes in this section
>
> -- Component Instantiation for SRL16 should be placed
> -- in architecture after the begin keyword
>
> signal feedback : std_ulogic;
>
> begin
>
> SRL16_INSTANCE_NAME : SRL16
>   -- synthesis translate_off
>   generic map (
>     INIT => X"7878")
>   -- synthesis translate_on
>   port map (Q   => feedback,
>             A0  => '0',
>             A1  => '1',
>             A2  => '0',
>             A3  => '1',
>             CLK => clk,
>             D   => feedback);
>
> q <= feedback;
>
> end Behavioral;

-- 
--Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930 Fax 401/884-7950
email ray@andraka.com
http://www.andraka.com

"They that give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety." -Benjamin Franklin, 1759

Article: 73965
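Putting Ray's points together, a version of the instantiation that carries the INIT value both to simulation (via the generic) and to the bitstream (via the attribute) might look like the sketch below. The "7878" value and instance label are taken from the posted code; the attribute syntax is the XST convention and should be checked against your synthesis tool's documentation.

```vhdl
library IEEE;
use IEEE.STD_LOGIC_1164.ALL;
library UNISIM;
use UNISIM.VComponents.all;

entity srltest is
  port (
    clk : in  std_ulogic;
    q   : out std_ulogic);
end srltest;

architecture Behavioral of srltest is
  signal feedback : std_ulogic;

  -- INIT attribute on the instance label: this is what reaches the EDIF
  -- netlist (and so the bitstream); synthesis reads it, simulation ignores it.
  attribute INIT : string;
  attribute INIT of SRL16_inst : label is "7878";
begin
  SRL16_inst : SRL16
    -- synthesis translate_off
    generic map (INIT => X"7878")  -- simulation-only initialization
    -- synthesis translate_on
    port map (
      Q   => feedback,
      A0  => '0',  -- fixed tap address, as in the original post
      A1  => '1',
      A2  => '0',
      A3  => '1',
      CLK => clk,
      D   => feedback);

  q <= feedback;
end Behavioral;
```

With the generic wrapped in translate_off/on and the attribute present, simulation and hardware start from the same contents, and the EDIF carries exactly one INIT.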
Hello Nirav,

The problem in this case is that System Generator requires the Core Generator tool to be installed. The WebPACK version of ISE does not come with Core Generator. Also, the latest versions of System Generator require either Matlab R13.1 or R14. Our website should have all the information you need to determine the required software. Here are the specific links:

http://www.xilinx.com/products/software/sysgen/sw_req.htm
http://www.xilinx.com/products/software/sysgen/compatibility_matrix.htm

There are also evaluation versions of all the software, so you can still check it out.

Regards,
Elliot

Nirav Shah wrote:
> Friends,
> I was trying to install Xilinx System Generator v6.2 (evaluation) on my
> computer. I have Xilinx WebPACK 6.2i SP3 with IP Update 1.1 for it, and
> Matlab 6.5 (R13).
> But while installing it says "Can not find Code GENERATOR" installation.
> I checked the environment variable, XILINX = c:\Xilinx, and that's where
> my installation is.
> Any suggestions?
> Thanks
> Nirav

Article: 73966
Tim,

We do have hidden structures, and you might say they are there for test, but they are often there for other reasons. Most commonly the designers want to see something, so we make it accessible (i.e., it is connectable, but we do not tell customers how to do it, and the software does not support it).

Spare columns are something else entirely: if a column is bad, then by a laser fuse (done at test time on the tester) the bad column can be replaced by a good one. Altera pioneered this usage in their products, and has many patents on it. As far as it goes, it adds slightly to the area and provides higher yields (depending on the process yield -- see below). The issue with it is that the timing models have to account for the worst-case repair, and variability in timing may result if you do not have enough slack in your design (i.e., you cut it too close). Now granted, any marginal design can cause grief, so the claim of variable behavior is really just FUD. The claim that the worst numbers have to be the numbers is real. But it is also not useful, as who cares?

ASICs also use column replacement (memory arrays), as they also want to have better yields. In fact, most memory products have some form of redundancy. The EM poly fuse was specifically developed for that purpose, so they did not have to zap laser fuse elements. Laser fuse elements are pretty large, and special equipment is needed. A poly fuse can be programmed by voltages and currents, so it could be made to be programmed by a tester through the normal means of providing signals and voltages. Poly fuses are not all that reliable, so you really need two going into a gate to be sure that you can get at least one to program. They are also large, as well.

Regardless, having a good NV RAM cell, or a fuse cell, is beneficial to ASICs, FPGAs, memories, etc., as redundancy can be controlled if it is deemed to be beneficial. Often assumptions about yield are made to justify redundancy. That is one of the most dangerous gambles you can take, as foundries are highly motivated to improve their yields. So if the method of redundancy improves yields when intrinsic yield is poor, it turns out it is decreasing yield (because of the wasted space) when yields become good.

Hope that helps.

Austin

Tim wrote:
> Austin Lesea wrote:
>
>> We can get an arbitrarily high coverage by just increasing our
>> patterns (99.9%+) for 0 added silicon cost. ASICs can not do that.
>
> Do FPGAs typically have significant 'hidden' test structures on board?
> Did they in the past?
>
> I'm pretty much Xilinx in my DNA, but as I recall Altera used to
> advertise a spare-column arrangement to improve yield, as in DRAM.
> I guess this would be more or less impossible with ASICs, and maybe
> impossible with modern FPGAs.

Article: 73967
General Schvantzkoph <schvantzkoph@yahoo.com> wrote in message news:<pan.2004.10.01.14.16.37.467412@yahoo.com>...
> On Fri, 01 Oct 2004 06:18:58 -0700, Jon Beniston wrote:
>
> >> Here are some numbers:
> >> ASICs are only for extreme designs:
> >> extreme volume, speed, size, low power
> >> Cost of a mask set for different technologies:
> >> 250 nm: $ 100 k
> >> 180 nm: $ 300 k
> >> 130 nm: $ 800 k
> >>  90 nm: $1200 k
> >>  65 nm: $2000 k
> >> plus design, verification and risk.
>
> > Hmm. Take those figures with a pinch of salt. At the higher
> > geometries, you can definitely get much cheaper than that.
>
> Those figures are for pure ASICs, what are the costs for structured ASICs?

I don't know much about <= 90nm, but if you're paying $300k for 180nm you're getting robbed. Structured-ASIC NRE is significantly less, say $40-70k depending on process and vendor.

Cheers,
Jon

Article: 73968
GS, Well, I looked at this all day, and felt I had to say something. See below, Austin General Schvantzkoph wrote: --snip--- > Peter, > > While all of those structures that you mention are as efficient as an > ASIC, and probably more efficient because Xilinx can afford to spend more > to optimize them, the fact remains that most of the resources on an FPGA > are unused in any particular design. No argument here. In fact, somewhere between 3 and 7% of the memory cells are actually used to determine the user logic pattern. That is why the SEUPI (single event upset probability impact) factor can be from 10 to 100 (the factor that calculates on the average how many single event upsets from cosmic rays are needed to actually create a fault in the user pattern). Some resources, like multipliers, are > only used for certain types of applications. I've been using Xilinx FPGAs > since the 3000 series, I've never needed a multiplier once in all of those > years. Some resources, like the PPCs, are useful but no one needs as many > as many as Xilinx puts on a chip. The Virtex2P has up to four PPCs, who > needs four? Not using all four? Wow. Are you not in tune with the times. If PPC's are free, why not use them? As it turns out, we also had our doubts, but leave it to the creativity of the engineers out there: if it is in the chip, and can be used, it will be used. In a recent customer visit, the sales folks said "here is a customer that doesn't use any PPC's." We go in, and it turns out every design now uses every PPC. Amazing. Virtually every embedded system that I've worked on has one > processor, generally an IBM 405 or equivalent, but none has ever had more > than one. Sure, that is because they are a big expense. If they are free, then guess what? Folks use them. Xilinx put four on the Virtex2P because silicon guys made the > decision, not system designers. Wrong. Folks with some vision and understanding made the decision. 
Three out of four of the PPCs are just > taking up space and burning power, they are useless. Some resources are > useful for every design, like Block RAMs, but most of the time you don't > use all of them. True, but it is remarkable how useful even BRAMs can be when you do not think they have any uses. See Peter's usage of a BRAM as a state machine, or other uses to replace logic with a big LUT. On Spartan3 there isn't a lot of Block RAM available so > you tend to use most of the RAMs, although here you generally reserve > several for ChipScope, but the big Virtex (2, 2P, 4) class parts there is > so much RAM that you are unlikely to need all of it. True. I still would like to know when enough is enough. I think V2P has too much BRAM. We may have gone overboard. Be the first time in history that anyone anywhere had too much memory. If true, it will be 66 point type headllines in all of the press: "Too Much Memory on FPGA!" And some resources > are hugely under used by there very nature, i.e. the interconnect. Most of > the area in the FPGA sections of the FPGA is devoted to programable > interconnect. Only a few percent of the interconnect resources can be used > in any particular design, that's the nature of the beast. If you have a > mux in a crosspoint switch that has 8 inputs, 7 out of 8 inputs must go > unused. There isn't any way to get around this in a programable device, if > you cut down on the interconnect resources the part becomes unroutable. Yes, that is obvious. It is also the reason why folks were absolutely certain that FPGAs would never get anywhere at all. Whoops! Were they wrong, or what? An > ASIC that uses metal to make connections has a 50 to 1 advantage over and > FPGA in this area. Maybe 20:1. Don't get so excited. No one will ever cause ASICs to go away, but the number of ASIC design starts is diminishing steadily every year. Even structured ASIC starts are an insignificant factor. 
> The place where FPGAs win big is NRE, an FPGA design
> costs millions less than an ASIC design. Even if you take out the cost of
> the mask set there is a big difference because you don't have to spend as
> much on verification on an FPGA design as on an ASIC.

Even though you should (spend money on verification). People do not.

> With an FPGA you can
> go into the lab with a design that's almost right because fixing the bug
> is cheap, with an ASIC you have to have a design that's completely right
> before you build it because you can't fix it later.

Again, fixing it as it is shipped is not a way to run a company. Not cheap
if you ship 10,000 that you later have to retrofit (reprogram). But if it
provides a competitive advantage, people will take advantage of it.

> Clearly if you are
> building a small number of systems, a few thousand or less, you want to
> use FPGAs. If you are building a huge number, hundreds of thousands or
> more, you want an ASIC. In between is a grey area that has to be evaluated
> for each system.

There are systems being sold in the hundreds of thousands that do use
FPGAs. The reasons? Sometimes their market is still evolving. Sometimes
their product has incremental features available later in time (steady
source of revenue, not a bad business model?). Sometimes they just don't
have the time or energy to make an ASIC (too busy working on the next
product, as taking time to save some pennies might mean they go out of
business). Sometimes they have tried to make an ASIC, and failed. This
last case is one that is now becoming more common. ASIC design is really
tough (we know!!!). Especially if you need ultra-deep sub-micron
technology.

Article: 73969
Hi,

No config that I'm aware of other than in the SOPC builder wizard. I
looked in the .ptf file and didn't see any additional settings that looked
interesting.

Ken

"Markus Meng" <meng.engineering@bluewin.ch> wrote in message
news:aaaee51b.0410010914.3916acce@posting.google.com...
> Hi,
>
> I expected that you can initialize the burst
> length after SDRAM startup. I just wonder what happens
> in case you have a Nios-II cache miss.
> However I will have to spend some time simulating the whole bunch ...
>
> Configuration of SDRAM burst length is a normal step in
> using SDRAMs, isn't it?
>
> Best Regards
> Markus
>
> zohargolan@hotmail.com (zg) wrote in message
> news:<e24ecb44.0409301615.7b8bccaf@posting.google.com>...
> > Hi Ken,
> >
> > Thank you for your response.
> > What frequency are you running the Nios? I tried simulating at 72 MHz
> > and I got only 2-word bursts.
> > Are you using Nios II?
> >
> > Regards,
> > Zohar
> >
> > "Kenneth Land" <kland_not_this@neuralog_not_this.com> wrote in message
> > news:<10lg0aca67jseaa@news.supernews.com>...
> > > "zg" <zohargolan@hotmail.com> wrote in message
> > > news:e24ecb44.0409241455.340ba130@posting.google.com...
> > > > Hi All,
> > > >
> > > > I am trying to use the SDR SDRAM controller that is coming with the
> > > > NIOS II development package. In the simulation it looks like this
> > > > core supports only 2-word bursts. I couldn't find anything in the
> > > > documentation.
> > > > Am I correct?
> > > > If this core supports bigger bursts than 2 words, any ideas what am
> > > > I doing wrong?
> > > >
> > > > Thank you all
> > > > Zohar
> > >
> > > Zohar,
> > >
> > > I'm working on a design that uses one 16MB sdram chip for most of its
> > > instruction and data memory. I rely heavily on dma and other burst
> > > reads and writes (ie cache) to get the performance I need.
> > >
> > > The sdram controller you're talking about is good enough to burst 480
> > > 32-bit words in under 485 cpu clocks.
> > > It's a beautiful thing to watch in
> > > SignalTap as I get this performance. (Haven't explored the length
> > > limits above 480 - my external FIFO's AlmostFull level.)
> > >
> > > I'm hammering that sdram (through the Altera sdram controller) nine
> > > ways to Sunday and it performs flawlessly.
> > >
> > > I have some serious issues with the whole Nios I/II chain, but the
> > > sdram controller has been a champ.
> > >
> > > Ken

Article: 73970
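For reference on the burst-length question in the thread above: on standard SDR SDRAM the burst length is fixed once, via the mode register written during initialization, which is presumably what the SOPC builder wizard sets at generation time. A sketch of how such a mode-register word is assembled - the field layout follows the common JEDEC SDR SDRAM definition, but check the datasheet of the actual device:

```c
#include <assert.h>
#include <stdint.h>

/* Assemble a JEDEC-style SDR SDRAM mode register word.
 * A2..A0: burst length (1,2,4,8 -> code 0,1,2,3; full page -> 7)
 * A3    : burst type (0 = sequential, 1 = interleaved)
 * A6..A4: CAS latency
 * A9    : write burst mode (0 = burst, 1 = single-location writes) */
uint16_t sdram_mode_word(int burst_len, int interleave,
                         int cas_latency, int single_writes)
{
    uint16_t bl_code;
    switch (burst_len) {
    case 1:  bl_code = 0; break;
    case 2:  bl_code = 1; break;
    case 4:  bl_code = 2; break;
    case 8:  bl_code = 3; break;
    default: bl_code = 7; break;  /* full page */
    }
    return (uint16_t)(bl_code
                      | (interleave ? 1u << 3 : 0u)
                      | ((unsigned)cas_latency << 4)
                      | (single_writes ? 1u << 9 : 0u));
}
```

If the controller only ever issues the LOAD MODE REGISTER command with a burst-length code of 1 (i.e. 2-word bursts), that would explain the behavior Zohar saw in simulation.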
Well, if we knew a way to replace the configuration storage latches with
NV cells of the same size, reliability, unlimited re-writability, and fast
speed (?), without a negative impact on the processing complexity or
yield, you bet we would do it, whether it's needed for security or not...
But...

Peter Alfke

> From: "superfpga" <superfpga@pacbell.net>
> Organization: SBC http://yahoo.sbc.com
> Newsgroups: comp.arch.fpga
> Date: Fri, 01 Oct 2004 19:44:00 GMT
> Subject: Re: NV on-chip memory?
>
> Nicholas -
>
> Regarding point d - I agree, depending on your cost threshold for
> security and purpose. S/W code flexibility? Cost of battery + NV chip =
> 20% premium on a $10 12K-Lcell low-cost device? I think the decision
> will vary situation by situation.
>
> For points a and b - regarding rewritable flexibility - despite the
> value, I have been told by many FPGA vendor employees that something
> like 50% of FPGA users do not provide means for in-field configuration
> data upgrades. For these customers I thought that on-chip OTP NV data
> would provide extra storage utility since they do not have the external
> NV chip. Thoughts?
>
> "Nicholas Weaver" <nweaver@soda.csua.berkeley.edu> wrote in message
> news:cjk0pe$fv6$1@agate.berkeley.edu...
>> In article <29f7d.3570$JG2.795@newssvr14.news.prodigy.com>,
>> superfpga <superfpga@pacbell.net> wrote:
>>> Paul -
>>> You make some good points in your discussions, especially about the $1
>>> extra cost compared to a $100 FPGA, as well as highlighting the
>>> significance of saving that same $1 if it was in a Cyclone low-cost
>>> device.
>>>
>>> However, how would you change your mind if some of the newer developing
>>> NV memory technologies were similar in size to a flash cell/DRAM cell
>>> (<< area of SRAM), also used a standard leading-edge CMOS process
>>> (unlike flash or DRAM), thus much more cost-effective bulk memory
>>> without adding any process premium to the rest of the chip?
>>> Though maybe not the holy grail, don't you think
>>> it provides a good amount of value, as cited by some of Hal's, Jim's
>>> and Nicholas' posts? There are probably even other ideas out there for
>>> value.
>>
>> a) I doubt it would come without a process penalty in the .13 or 90nm
>> node (or the equivalent at a given point in time). If it could, that
>> might be another story.
>>
>> b) If it could, it would need to be many times rewritable to be
>> acceptable to the FPGA companies. One of the real advantages of the
>> FPGAs over other technologies (antifuse, etc.) is the unlimited rewrite
>> capability. Thus a write-once or limited-write memory would be far
>> less useful.
>>
>> c) My point is that the security aspect of embedded NVRAM is
>> significantly less when compared to using the existing SRAM-based
>> bitfile encryption to protect a large external nonvolatile store.
>> So I'm a naysayer on NVRAM.
>>
>> --
>> Nicholas C. Weaver. to reply email to "nweaver" at the domain
>> icsi.berkeley.edu

Article: 73971
http://uclinux.openchip.org/forum/viewtopic.php?t=4

Here is proof :) The open-source version as available from OpenCores is
not yet good enough to run uClinux - well, let's see if that will change.
I think there is a wish inside many people to have an open-source,
multi-vendor FPGA softcore capable of running uClinux :)

Antti

Article: 73972
He is a friend who uses the same ISP when he is in town.

"Rotund Phase" <rotund_phase@hotmail.com> wrote in message
news:2s3g5jF1fsitlU1@uni-berlin.de...
> My bad Craig, just thought that as you and that other Guy guy appear to
> be posting from the same IP address you might know him.
>
> "CraigR" <craigra@pacbell.net> wrote in message
> news:RLZ6d.4747$nj.3494@newssvr13.news.prodigy.com...
>> Nope. I did see that posting though. I am weighing cost benefits of
>> ASIC for field upgradability.
>>
>> "Rotund Phase" <rotund_phase@hotmail.com> wrote in message
>> news:2s31qkF1furbmU1@uni-berlin.de...
>> > Are you also guys@altera.com? From the 'NV on-chip memory' thread?
>> >
>> > "CraigR" <craigra@pacbell.net> wrote in message
>> > news:NbX6d.4661$nj.3114@newssvr13.news.prodigy.com...
>> >> In reviewing a specific requirement, my design team is debating the
>> >> benefits of in-field hardware upgradability. In the communications
>> >> space, wondering if most system developers require and use ICR in
>> >> production when implementing FPGA (instead of ASIC)?
>> >>
>> >> Craig

Article: 73973
Hello,

I need to implement gigabit Ethernet in an FPGA. Almost everything is done
now (MAC in FPGA, PHY), and I need to implement a simple protocol that
will ensure proper delivery of all data from a digital camera (1 picture =
128 MB). I thought about IP/UDP and my own protocol with handshake and
retransmission of lost packets, but maybe there exists something simpler -
already used and implemented protocols - I just need to make the
programmer's life easier at the other end of the cable :)

Regards
Greg

Article: 73974
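A minimal scheme for Greg's problem is a sequence-numbered block transfer with acknowledgements over UDP - essentially stop-and-wait, or a small window if more throughput is needed. The packet layout below is entirely hypothetical (no standard protocol is being quoted); it just shows how little state the FPGA side would need to keep:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical header for pushing a 128 MB image in fixed-size
 * blocks over UDP.  The receiver ACKs each sequence number; the
 * sender retransmits any block whose ACK times out. */
#define BLOCK_SIZE 1024u  /* payload bytes per packet (assumed) */

struct img_hdr {
    uint32_t seq;    /* block number within the image            */
    uint32_t total;  /* total number of blocks in the image      */
    uint16_t len;    /* valid payload bytes (<= BLOCK_SIZE)      */
    uint16_t flags;  /* bit 0: last block of the image           */
};

/* Number of blocks needed for an image of `bytes` bytes. */
uint32_t blocks_for_image(uint32_t bytes)
{
    return (bytes + BLOCK_SIZE - 1u) / BLOCK_SIZE;
}
```

On the FPGA side, stop-and-wait needs only the current sequence number, a timeout counter, and one block's worth of buffering; the PC side can reassemble the image by writing each block at offset `seq * BLOCK_SIZE`.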
>> The place where FPGAs win big is NRE, an FPGA design
>> costs millions less than an ASIC design. Even if you take out the cost
>> of the mask set there is a big difference because you don't have to
>> spend as much on verification on an FPGA design as on an ASIC.
>
> Even though you should (spend money on verification). People do not.
>
>> With an FPGA you can
>> go into the lab with a design that's almost right because fixing the
>> bug is cheap, with an ASIC you have to have a design that's completely
>> right before you build it because you can't fix it later.
>
> Again, fixing it as it is shipped is not a way to run a company. Not
> cheap if you ship 10,000 that you later have to retrofit (reprogram).
> But if it provides a competitive advantage, people will take advantage
> of it.

I didn't say no verification; I would never go into the lab with a design
that hadn't been verified. The difference is that you can spend a few
weeks to a couple of months simulating an FPGA and then go into the lab
and run it in a real system at the real clock speed and get the last bugs.
In big systems the bit patterns should be loaded by the software, not by
serial PROMs; if a bug crops up you rev the software. Everyone expects
software to be buggy, so no one blinks an eye if there is a new software
release - let the software guys take the blame. In my experience the
software guys feel so guilty about bugs that it's hard to convince them
that it's not their fault. If a system can't be field upgraded then you
have to test the hell out of it, but you do that on the real hardware.
Either way you are in the lab a full 12 to 18 months earlier with an FPGA
than you are with an ASIC.