In article <35D098E9.764ABCC0@nt.com>, Catalin <baetoc@nt.com> wrote:
>Kenneth,
>1/4+1/16+1/64+1/256+1/1024+...=1/3. So shift your number by 2, 4, 6, 8, etc.
>and add everything.
>Catalin Baetoniu

This looks good, BUT it loses by rounding down. For example, dividing
3 will get 3/4 + 3/16 + ..., and this will end up being 0.

Nor does adding 1 help. Take 9; even 10/4 + 10/16 + ... yields
2 + 0 = 2, instead of 3.

Nor does simply going in the other direction make it easy. We have

1/3 = 1/2 - 1/8 - 1/32 - ...,

but doing it this way does not work either; try it for 2 and 8.

I believe that there are better ways, but it is necessary to be careful.

>Kenneth W. Wagner wrote:
>> I need to implement in an fpga an algorithm that will divide an integer
>> by 3. The dividend length is still to be determined but will be
>> somewhere between 20 and 30 bits, and the divisor is always the number 3.
>> Does anyone know an efficient combinatoric algorithm that can accomplish this?
>> Thanks in advance,
>> Ken
--
This address is for information only. I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN 47907-1399
hrubin@stat.purdue.edu  Phone: (765) 494-6054  FAX: (765) 494-0558
Herman Rubin wrote:

> In article <35D098E9.764ABCC0@nt.com>, Catalin <baetoc@nt.com> wrote:
> >Kenneth,
> >1/4+1/16+1/64+1/256+1/1024+...=1/3. So shift your number by 2, 4, 6, 8, etc.
> >and add everything.
> >Catalin Baetoniu
>
> This looks good, BUT it loses by rounding down. For example, dividing
> 3 will get 3/4 + 3/16 + ..., and this will end up being 0.
>
> Nor does adding 1 help. Take 9; even 10/4 + 10/16 + ... yields
> 2 + 0 = 2, instead of 3.
>
> Nor does simply going in the other direction make it easy. We have
>
> 1/3 = 1/2 - 1/8 - 1/32 - ...,
>
> But doing it this way does not work either; try it for 2 and 8.
>
> I believe that there are better ways, but it is necessary to be careful.

Depends on how much precision you are carrying! For your example of 3, if you
have no bits below the radix point, then you will get zero (this is an LSB
error). For three bits under the radix point, you get 3/4 + 1/8 + 0 = 7/8. If
you have four bits below the radix point, then you get 3/4 + 3/16 + 0 = 15/16.
If you have 8 bits below the radix, you get 3/4 + 3/16 + 3/64 + 3/256 + 0 =
255/256. All of these results are an error of one LSB.

--
-Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930  Fax 401/884-7950
email randraka@ids.net
http://users.ids.net/~randraka
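[Editorial note] To make the precision point concrete, here is a minimal, untested VHDL sketch of the shift-and-add scheme with guard bits kept below the radix point. The entity name, the widths (N, F) and the rounding constant are assumptions of mine, not anything posted in the thread.

-- Hypothetical sketch: approximate x/3 as x/4 + x/16 + x/64 + ..., carried
-- with F guard bits below the radix point. Without the rounding constant,
-- exact multiples of 3 come out one LSB low (the 255/256 effect above).
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity div3_series is
  generic (
    N : natural := 24;   -- dividend width (assumed)
    F : natural := 8     -- guard bits below the radix point (assumed)
  );
  port (
    x : in  unsigned(N-1 downto 0);
    q : out unsigned(N-1 downto 0)
  );
end entity div3_series;

architecture rtl of div3_series is
begin
  process (x)
    variable xf  : unsigned(N+F-1 downto 0);
    variable acc : unsigned(N+F-1 downto 0);
  begin
    xf  := shift_left(resize(x, N + F), F);   -- x scaled by 2**F
    acc := (others => '0');
    for i in 1 to (N + F) / 2 loop            -- add x/4, x/16, x/64, ...
      acc := acc + shift_right(xf, 2 * i);
    end loop;
    acc := acc + 2**(F - 2);                  -- rounding; compensates the undershoot
    q   <= acc(N + F - 1 downto F);           -- drop the guard bits
  end process;
end architecture rtl;

For the parameters above (N = 24, F = 8) the accumulated truncation error of the sixteen shifted terms stays well below the 2**(F-2) rounding constant, so the guard bits absorb it and q comes out as floor(x/3); for other widths the guard-bit count would need rechecking.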
For a totally hardware solution, the traditional butterfly calculation may not
be the best approach. One alternate approach is found by realizing that the
multiplications in the butterfly are performing a rotation of a complex vector.
This can be done by doing a CORDIC rotation instead, and it uses less hardware.

In a LUT based FPGA, it is also possible to use a distributed arithmetic
algorithm. Les Mintzer did a paper a number of years ago on a DA implementation
of a 1024 point FFT, which if I remember right, he did in a tad over 100 us.
It fit easily into a 4025 device. With the newer FPGAs, that number can be
significantly improved, both because the speeds have increased considerably
since then, and because you have many more gates to allow a higher degree of
parallelism.

Of course that is for a fixed point implementation (you don't want to do
floating point in an FPGA if you don't have to). You can however do a block
floating point implementation for relatively little extra overhead.

John L. Smith wrote:

> ems wrote:
>
> > On Wed, 12 Aug 1998 17:08:46 +0200, Thomas Focke
> > <thomas.focke@himh1.hi.bosch.de> wrote:
> >
> > >I'm looking for a comparison regarding achievable FFT-speed between
> > >FPGA vs. DSP-solutions.
> > >For instance: DSP TMS C6x can manage a 1024 point-FFT in 104 µs.
> > >Which FPGA can achieve which speed?
> >
> > First of all, you need a pipelined 32-bit FP adder, *and* a pipelined
> > 32-bit FP multiplier (ie. you need an ASIC). For a 1K complex
> > transform, you also need a fast 8Kbyte data cache, with an access time
> > equal to the cycle time of the multiplier and adder.
> >
> > snip
> >
> > In other words, using only one multiplier and one adder, on a 20ns
> > cycle time, means you're running at only one-tenth of the speed
> > already achievable by a commercial device. The only way to
> > significantly increase speed is to have multiple FP units, which is
> > what TI does.
>
> Evan, Have you looked at any of the papers that discuss doing FFT
> in FPGA using Distributed Arithmetic methods? Is an FP multiplier
> absolutely necessary, or can some of the multiplications and/or additions
> be performed more efficiently in FPGA by using the 4 input LUT
> structures? If there is some flaw in the papers that have been written
> saying FPGAs can outperform a C80, I'm sure many folks
> would like to know.

--
-Ray Andraka, P.E.
President, the Andraka Consulting Group, Inc.
401/884-7930  Fax 401/884-7950
email randraka@ids.net
http://users.ids.net/~randraka
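[Editorial note] For readers who have not met the CORDIC alternative mentioned above, below is a minimal, hypothetical VHDL sketch of a single micro-rotation stage; the entity name, widths and the angle scaling (1 radian = 2**15, so atan(1) = 25736) are assumptions of mine. A full rotator chains one such stage per bit of angular resolution and then corrects for the fixed CORDIC gain of about 1.647.

-- Hypothetical sketch of one CORDIC micro-rotation stage (stage index I):
-- rotates (x, y) by +/- atan(2**-I) using only shifts and adds, steering
-- the direction so the residual angle z is driven toward zero.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity cordic_stage is
  generic (
    W      : natural := 16;      -- data/angle width (assumed)
    I      : natural := 0;       -- stage index (assumed)
    ATAN_I : integer := 25736    -- atan(2**-I), scaled so 1 radian = 2**15 (assumed)
  );
  port (
    clk          : in  std_logic;
    x_in, y_in   : in  signed(W-1 downto 0);
    z_in         : in  signed(W-1 downto 0);   -- residual rotation angle
    x_out, y_out : out signed(W-1 downto 0);
    z_out        : out signed(W-1 downto 0)
  );
end entity cordic_stage;

architecture rtl of cordic_stage is
begin
  process (clk)
  begin
    if rising_edge(clk) then
      if z_in(W-1) = '0' then                    -- residual angle >= 0: rotate by +atan(2**-I)
        x_out <= x_in - shift_right(y_in, I);
        y_out <= y_in + shift_right(x_in, I);
        z_out <= z_in - to_signed(ATAN_I, W);
      else                                       -- residual angle < 0: rotate by -atan(2**-I)
        x_out <= x_in + shift_right(y_in, I);
        y_out <= y_in - shift_right(x_in, I);
        z_out <= z_in + to_signed(ATAN_I, W);
      end if;
    end if;
  end process;
end architecture rtl;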
Herman Rubin wrote:

> In article <35D098E9.764ABCC0@nt.com>, Catalin <baetoc@nt.com> wrote:
> >Kenneth,
> >1/4+1/16+1/64+1/256+1/1024+...=1/3. So shift your number by 2, 4, 6, 8, etc.
> >and add everything.
> >Catalin Baetoniu
>
> This looks good, BUT it loses by rounding down. For example, dividing
> 3 will get 3/4 + 3/16 + ..., and this will end up being 0.
>
> Nor does adding 1 help. Take 9; even 10/4 + 10/16 + ... yields
> 2 + 0 = 2, instead of 3.
>
> Nor does simply going in the other direction make it easy. We have
>
> 1/3 = 1/2 - 1/8 - 1/32 - ...,
>
> But doing it this way does not work either; try it for 2 and 8.
>
> I believe that there are better ways, but it is necessary to be careful.

This problem is really identical to handling division by fixed constants with
reciprocal multiplication instead, and this problem has been fully solved, for
all possible divisors:

To get an N-bit accurate division result (assuming you want C semantics, i.e.
truncation), you need an N+1-bit reciprocal value, where the last bit is
rounded up. (You also need effectively 2N bits in the multiplication result,
but that can be handled with an N+1-bit register by starting from the LSB end
and shifting the current sum down after each addition, because the fractional
bits shifted out cannot contribute to any more carries.)

Terje

--
- <Terje.Mathisen@hda.hydro.com>
Using self-discipline, see http://www.eiffel.com/discipline
"almost all programming can be viewed as an exercise in caching"
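[Editorial note] As a concrete (hypothetical, untested) VHDL illustration of the reciprocal-multiplication scheme described above, specialized to the divisor 3 and a 16-bit dividend; the entity name and widths are my assumptions. The constant 43691 is the rounded-up N+1-bit reciprocal ceil(2**17/3).

-- Hypothetical sketch: q = floor(x/3) by multiplying with the rounded-up
-- reciprocal ceil(2**17/3) = 43691 and discarding the low 17 bits.
-- Since 3 * 43691 = 2**17 + 1, (x * 43691) / 2**17 = x/3 + x/(3 * 2**17),
-- and the extra term never reaches the next integer for x < 2**17, so the
-- truncated result is exact for every 16-bit x.
library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity div3_recip is
  port (
    x : in  unsigned(15 downto 0);
    q : out unsigned(15 downto 0)
  );
end entity div3_recip;

architecture rtl of div3_recip is
  constant RECIP : unsigned(16 downto 0) := to_unsigned(43691, 17);  -- N+1 = 17 bits
  signal   prod  : unsigned(32 downto 0);                            -- 16 x 17 bit product
begin
  prod <= x * RECIP;
  q    <= prod(32 downto 17);   -- shift right by 17: drop the fractional bits
end architecture rtl;

In an FPGA this constant multiplication reduces to a short shift-and-add chain, since 43691 is 1010101010101011 in binary.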
Ray Andraka wrote:

> In a LUT based FPGA, it is also possible to use a distributed arithmetic
> algorithm. Les Mintzer did a paper a number of years ago on a DA
> implementation of a 1024 point FFT, which if I remember right, he did in a
> tad over 100 us.

Les Mintzer's paper can be found online in the ICSPAT proceedings for '96,
at http://www.icspat.com.

John L. Smith
Principal Engineer, Visicom Imaging Products
jsmith@visicom.com
Hi,

I'm looking for the book by Ed Karalis, "Digital Design Principles and
Computer Architecture". It's for a class, but I'm looking to find it used as
I don't want to spend $100 on a book for one course.

Thanx,
Dan
nadson@aol.com
Hi, my name is Scott Campbell, and I will be graduating this June (99) with my BS in computer engineering from University of California Davis. I am curious if any of you have insight on what kind of salary range I could expect as an entry level ASIC/FPGA engineer. Any help you can give would be appreciated.
Hi I am looking for a SCSI core for an ALTERA flex device. Any leads? Thanks! -- Jason Caulkins
Joseph H Allen wrote ...

>RAMBUS terrifies me because I would not be able to hook them up to FPGAs
>as I can with SDRAMs. Also they actually require a protocol. Often I can
>do a FIFO-less continuous data design with SDRAMs, but I doubt this will be
>possible with RAMBUS. Anyway, I hope RAMBUS goes away.

Interesting topic!

Interfacing to SDRAMs at full speed, sans PLLs, seems tricky -- e.g. Brad
Taylor's XCELL article "XC4000XL FPGAs Interface to SDRAMs at 100 MHz"
(http://www.xilinx.com/xcell/xl28/xl28_25.pdf) describes fairly narrow margins:
"...Tco + Tsu = 6.0 ns + 3.0 ns = 9.0 ns (This allows 1.0 ns slack for board
delay and clock jitter.)..." ... "... requires the board delay to compensate
for clock jitter...".

I think DRDRAM will be a common DRAM interface. So I wonder -- wishful
thinking -- *could* FPGA vendors license the RAC cell
(http://www.rambus.com/presentations/controller_design/sld001.htm) and build
on that to provide an on-chip, dedicated, easy-to-use DRDRAM controller?

For example, for a hypothetical XC4000- or ORCA-like device, this could be a
configurable DRDRAM interface adjacent to the right edge IOBs, with these
ports:

*** dout[n-1:0], din[n-1:0] (n = 16, 18, 32, 36, 64, 72, even 128, 144) --
sink and source DRDRAM data bits (through FIFOs :-) ) -- replaces
corresponding n IOBs, using their programmable interconnect (including IOBs'
longline TBUFs). Subsumes either n fixed IOBs, or n/k programmable groups of
k IOBs, or n programmable arbitrary sequential IOBs (a la XC6200 programmable
row scatter/gather idea).

*** addr[m-1:0] (m = 16 or 32) -- provide word or block burst address in one
(addr[31:0]) or two (addr[15:0]) cycles, either on a dedicated addr[] port or
multiplexed onto dout[].

*** cmd[5:0] -- commands to reset, read/write/stream data into/out of fifos,
open/close banks, precharge, masks, whatever, but keep it simple!

*** clk -- the above signals are synchronous to clk, which is also multiplied
up (by some programmed constant) using a hypothetical integrated DRDRAM clock
generator (PLL) to <= 800 MHz.

This on-chip DRDRAM controller interface might well be easier to design to
than is off-chip SDRAM today. And this hypothetical FPGA, configured with a
carefully floorplanned and pipelined 100 MHz, 64-bit datapath design, could
read 8 bytes and write 8 bytes per clock and thus consume the full DRDRAM
channel bandwidth.

Another application of FPGA RAC cells: virtual wires: you could (awkwardly)
stitch FPGAs together through RACs at 20 Gb/s/RAC
(http://www.rambus.com/presentations/controller_design/sld034.htm).

Speaking as an FPGA user, DRDRAM support seems cool, but then I don't really
understand the issues of economics, marketing, integration, clocking, RAC at
FPGA configuration time, RAC initialization and configuration, testing,
testing, testing, packaging, end-user board layout, debugging, etc. (Also, is
it likely that a DRDRAM clock generator
(http://www.rambus.com/presentations/controller_design/sld027.htm) can also be
integrated into our hypothetical FPGA+RAC device or does it require a somewhat
different process?)

Thanks.
Jan Gray
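[Editorial note] Purely to visualize the wish list above, here is a hypothetical VHDL entity for the imagined on-chip controller; the port names follow the post, while the generic default (n = 64) and the single-cycle 32-bit address option are arbitrary picks among the alternatives listed, and none of it corresponds to any real device or vendor primitive.

-- Hypothetical only: a VHDL view of the wished-for on-chip DRDRAM
-- controller interface described in the post above.
library ieee;
use ieee.std_logic_1164.all;

entity drdram_ctrl is
  generic (
    N : natural := 64   -- data width; the post lists 16/18/32/36/64/72/128/144
  );
  port (
    clk  : in  std_logic;                       -- multiplied up internally to <= 800 MHz
    cmd  : in  std_logic_vector(5 downto 0);    -- reset/read/write/open/close/precharge/masks
    addr : in  std_logic_vector(31 downto 0);   -- word or block burst address (single-cycle option)
    din  : in  std_logic_vector(N-1 downto 0);  -- data toward the DRDRAM channel (through a FIFO)
    dout : out std_logic_vector(N-1 downto 0)   -- data from the DRDRAM channel (through a FIFO)
  );
end entity drdram_ctrl;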
HI,

I'm looking for some FPGA applications for a PCI interface core, and I'm
curious what the rule of thumb is for estimating a PCI core's performance.
If there is one, does it differ between Target and Master?
And how long would it take to make that core with HDL? (for professionals,
grad students, and professors... ^^)

Thanks...

YongKook Kim
Dankook Univ., Dept. of Electronics E. VLSI Design Lab.
likepunk@dankook.ac.kr
I had assumed that the original poster was interested in floating point, since
he gave a figure for TI's C6x, and so DA wouldn't be appropriate.

Mintzer's paper from ICSPAT '96 appears to give a time of 1.6 milliseconds for
his 16-bit fixed-point non-complex FFT (16 cycles/butterfly * 5120 butterflies
* 20ns cycle time). If this is representative, then it's still way off the
pace for dedicated fixed-point devices. Sharp's 24-bit device (LH9124) was
doing 53 microseconds some years ago, using block "floating point" (and, IIRC,
it has to use 60-bit internal precision to maintain the pretence of floating
point).

The problem with the FFT is that it's so well-defined that lots of people have
done custom silicon for it, so it's difficult for an FPGA to keep up (or
impossible, for proper floating point).

Evan
$75K if you can show University experience in FPGA design. ...How about that!

Scott Campbell wrote in message <35D754DC.F8B1954A@earthlink.net>...
>Hi, my name is Scott Campbell, and I will be graduating this June (99)
>with my BS in computer engineering from University of California Davis.
>I am curious if any of you have insight on what kind of salary range I
>could expect as an entry level ASIC/FPGA engineer. Any help you can give
>would be appreciated.
> I'm curious what the rule of thumb is for estimating a PCI core's
> performance.

If done correctly, performance can equal any 33MHz 32bit ASIC available.

> If there is one, does it differ between Target and Master?

Yes.

> And how long would it take to make that core with HDL? (for professionals,
> grad students, and professors... ^^)

HDLs can be a poor choice for PCI development in an FPGA, so it could take
forever ;-)

It is hard to judge just how long a PCI interface could take, as it depends on
the requirements you have for the back end interface. 'Basically' the PCI
interface design is only 25% of the total work. Placing and routing the PCI
interface is another 25%, the back end design is 25%, and placing and routing
the back end can be around 25%.

Austin Franklin
darkroom@ix.netcom.com
Herman Rubin wrote:

> In article <35D098E9.764ABCC0@nt.com>, Catalin <baetoc@nt.com> wrote:
> >Kenneth,
> >1/4+1/16+1/64+1/256+1/1024+...=1/3. So shift your number by 2, 4, 6, 8, etc.
> >and add everything.
> >Catalin Baetoniu
>
> This looks good, BUT it loses by rounding down. For example, dividing
> 3 will get 3/4 + 3/16 + ..., and this will end up being 0.

Maybe I was not clear enough. By 3 * 1/4 I do not mean integer division of 3
by 4, which is of course 0, but right shift of 3 by two bits. So for 3 you get
0.11 + 0.0011 + 0.000011 + ... (in binary), which adds to (almost) 1, not zero.

> Nor does adding 1 help. Take 9; even 10/4 + 10/16 + ... yields
> 2 + 0 = 2, instead of 3.

The same goes here. For 10/3 we get (again in binary)
10.10 + 0.1010 + 0.001010 + ..., which gives you 11.01010101..., as close to
10/3 as you can get. And for 9/3, 10.01 + 0.1001 + 0.001001 + ... = 10.11111...,
which is again almost right. By adding a 1 to the least significant factor you
can force rounding instead of truncation and you get even better results.
Others in this group (Erwin Oertli) have already shown how to improve this
even further by reducing the number of additions required.

> Nor does simply going in the other direction make it easy. We have
>
> 1/3 = 1/2 - 1/8 - 1/32 - ...,
>
> But doing it this way does not work either; try it for 2 and 8.
>
> I believe that there are better ways, but it is necessary to be careful.

Catalin Baetoniu
I believe the other response to be quite a bit 'optimistic' ;-)

I would expect $40-$60K depending on your 'real' experience. Within two years,
after you've actually done some 'real' projects successfully, you could expect
to move up quite a bit (50-70K+).

Look in the back of EE-Times for salaries...they advertise for ASIC engineers
all the time. Also look in the help wanted section of your major city
newspaper (San Francisco/San Jose would be best), and you'll get a better
picture of 'salary reality'.

Austin Franklin
darkroom@ix.netcom.com

Scott Campbell <sjcampbe@earthlink.net> wrote in article
<35D754DC.F8B1954A@earthlink.net>...
> Hi, my name is Scott Campbell, and I will be graduating this June (99)
> with my BS in computer engineering from University of California Davis.
> I am curious if any of you have insight on what kind of salary range I
> could expect as an entry level ASIC/FPGA engineer. Any help you can give
> would be appreciated.
We have two Data I/O Chiplab 48 "project" programmers, which are nice units.
But they are unusable under Windows NT due to the dreaded port access problem.
Data I/O's UK agents state that no software upgrade will allow function under
NT, and we've tried various tricks involving public-domain drivers such as
giveio.sys without success.

Anybody got any bright ideas? If not, we'll have to maintain a couple of Win95
machines just for these units.

Note this query is cross-posted - remove cross-posting of replies if you feel
so inclined!

TIA
--
Tim Forcer  tmf@ecs.soton.ac.uk
Department of Electronics and Computer Science
The University of Southampton, UK
The University is not responsible for my opinions
We have 6 ASIC and FPGA openings in Orange County, CA (LA area). For non-US
residents we provide a US work visa. Please contact us for further details.

Gary

Gary N. Lang
Vice President of ACD, Inc.
E-mail: garynlang@aol.com
Fax: 1-949-362-8046 (USA)
ems wrote:
>
> On Wed, 12 Aug 1998 17:08:46 +0200, Thomas Focke
> <thomas.focke@himh1.hi.bosch.de> wrote:
>
> >I'm looking for a comparison regarding achievable FFT-speed between
> >FPGA vs. DSP-solutions.
> >For instance: DSP TMS C6x can manage a 1024 point-FFT in 104 µs.
> >Which FPGA can achieve which speed?
>
> Forget it - this is way out of the league of an FPGA. I'm guessing
> that the C6x time is for a 32-bit floating point complex FFT, without
> bit-reversal.

The C6201 is a fixed point processor. TI claims that a 1024 point, complex
transform can be done in 66 us using a Radix 4 algorithm as opposed to 104 us
for the Radix 2 algorithm. This does make full use of the two MAC units on
board and is done in very highly optimized hand coded assembly. It also
assumes that the processor is running at the full 200 MHz and does not block
for instruction fetches. In a real machine, you will likely see lower
performance due to the limitations of I/O on the C6x family.

The floating point C67x branch of this family is claimed to calculate a
floating point FFT in 108 us using the Radix 4 algorithm or 124 us using a
Radix 2 algorithm. This processor will have many of the same speed limitations
from I/O as the fixed point versions.

> First of all, you need a pipelined 32-bit FP adder, *and* a pipelined
> 32-bit FP multiplier (ie. you need an ASIC). For a 1K complex
> transform, you also need a fast 8Kbyte data cache, with an access time
> equal to the cycle time of the multiplier and adder.

You only need FP units if you need FP math. The fastest FP DSP currently is
the Sharc, which produces a result in 460 us, a factor of close to 10 slower
than the C6201.

> There are a number of ways to do FFTs, but a straightforward radix-2
> butterfly will require 6 cycles, with the adder producing a new result
> on every cycle, and the multiplier producing 4 new results. You also
> have to get a lot of data in and out of this mess on every cycle.

This assumes that your adder and multiplier are not pipelined. Either one can
be pipelined to produce results with a faster clock. Or you can use 6 adders
and four multipliers to produce an FFT result on every clock cycle as does the
Sharp device.

> For a 1K transform, you repeat the butterfly 5120 times, giving a
> total of 30,720 cycles, or 614us for a 20ns cycle. You then double
> this for complex data, giving over 1.2ms, without bit-reversal,
> compared to TI's time of approx. 100us (and, in practice, there will
> probably be additional overhead related to getting data in and out of
> cache).
> In other words, using only one multiplier and one adder, on a 20ns
> cycle time, means you're running at only one-tenth of the speed
> already achievable by a commercial device. The only way to
> significantly increase speed is to have multiple FP units, which is
> what TI does. I looked at doing all this in an ASIC a few years ago,
> but the costs were prohibitive, and the performance wasn't up to it.
>
> Evan
>
> PS - anyone out there need someone to do an FFT ASIC? Mail me at the
> address above, minus the 'nospam'... :)

I haven't researched this in a few years, but it seems that there is a new FFT
chip out every 4 or 5 years that is just a bit faster than the current DSP
chips. It would appear that the Sharp chip is getting a little age on it. Does
anyone know of a faster dedicated chip in commercial production?

From the work I have been doing with Xilinx XL chips, I would hazard a guess
that with careful pipeline design you could get the clock speed up towards 80
or 100 MHz. Certainly there is enough I/O to read and write two words on every
clock along with the twiddle factors. I don't remember how fast the Sharp chip
runs. If it runs at 40 to 50 MHz, it would seem that you could double the
performance with an FPGA approach. To achieve the same speeds with DSPs, you
would need to use multiple DSPs along with multiple memories...etc. So, there
may even be some cost advantages to a "custom" FPGA approach if you consider
the total cost of the support devices and real estate.

--
Rick Collins
redsp@XYusa.net
remove the XY to email me.
My FPGA design team is located in Denver, CO. We work for our customers on an
outsource basis from this location. FPGA - Xilinx, Altera, Actel.

regards,
Blake Nelson
nelson@cstn.com
(303) 948-0482
XaQti Employment Opportunities

XaQti Corporation, a fabless semiconductor company located in North San Jose,
is a technology leader in the emerging gigabit Ethernet marketplace. XaQti is
an equal opportunity employer (EOE). If you are interested in participating in
a dynamic and rapidly growing company and you think you are qualified for any
of these positions, please send your resume to: ravi@xaqti.com

Listings by Job Title:

Tactical Marketing Manager

Job Description
The professional we seek will assume responsibility for the planning,
development, and introduction of semiconductor products. Additional
responsibilities will include scouting potential competitors, analyzing new
market opportunities as well as visiting OEM customers, forecasting,
supporting and training the sales force and making technical product
presentations. Other responsibilities include developing marketing strategies
and developing business cases for product opportunities. Assist Sales in
securing design wins. Train field application engineers as and when required.
Work very closely with Design Engineering and Software teams to produce data
sheets for new products. Assist in the definition of features and functions
for new product development. Travel to customer locations as necessary.

Requirements
The ideal candidates will have 4+ years related semiconductor experience in
creation of product strategies, product planning, and tactical marketing. A
four year marketing or technical degree, or equivalent experience, is
required; MBA preferred. Networking and Ethernet knowledge is highly desired;
good oral and written communication skills; ability to travel domestically and
internationally.

Technical Marketing

Job Description
This professional will support Marketing in technical marketing activities
including trade shows, direct mail, web site, literature and technical papers.
This individual will contribute to and monitor technical accuracy of product
collateral, press releases and benchmark reports, as well as provide
significant technical and collateral support on product launches. Recommend
positioning for company and products. Other responsibilities include
generation of sales leads, collateral support of partners and business
relationships, and attending trade and industry meetings such as trade shows,
product roll-outs or product seminars and event promotion.

Requirements
The ideal candidates will have related network or semiconductor experience.
BA/BS or equivalent with 5-7 years experience in high tech marketing,
including marketing communications, product launches and tactical marketing.
Candidates with hands-on experience in website and datasheet creation will be
given preference. Good written communication skills; ability to travel
domestically and internationally on an occasional basis.

Design Engineering Member of Technical Staff

Job Description
Be a principal technical contributor to IC/ASIC development. Collaborate with
software, architecture, systems hardware and applications groups to define
products and help drive all phases of chip development. Provide technical
leadership for the IC engineering group.

Requirements
BSEE/MSEE or equivalent and seven years experience designing ICs in the
networking/data-comm industries. Working knowledge of LAN, networking
protocols, Ethernet, fast Ethernet and gigabit Ethernet. Demonstrated command
of logic design principles, state machines and software/hardware partitioning
tradeoffs.

Senior ASIC Design Engineer

Job Description
Contribute to, participate in and lead some phases of IC/ASIC development.
Provide technical leadership in instituting development tools infrastructure
and methodology for the IC engineering group.

Requirements
BSEE/MSEE or equivalent and five years experience designing ICs in the
networking/data-comm industries. Working knowledge of LAN, networking
protocols, Ethernet, fast Ethernet and gigabit Ethernet. Experience working
with turnkey ASIC and/or COT foundry services.

Senior Hardware Systems Engineer

Job Description
Participate in all phases of systems hardware development of evaluation, demo
and application platforms for XaQti's high-speed networking silicon solutions.
Drive activities including design, schematic capture, layout, fabrication,
assembly and testing of PCB designs. Work with the IC team to tune timing
specifications for collateral and app notes.

Requirements
BSEE/MSEE or equivalent and five years experience in the definition,
architecture, design and debug of board-level system designs in the networking
and data-comm industries. Hands-on knowledge of state-of-the-art board and PCB
design tools and methods.

Hardware Design Engineer

Job Description
Participate in all phases of systems hardware development of all evaluation,
demo, and application platforms for XaQti's high-speed networking silicon
solutions. Participate in design, schematic capture, layout, assembly and
testing of PCB designs. Collaborate with software and IC design teams to
verify silicon-software-board functionality.

Requirements
BSEE/MSEE or equivalent and three years experience designing and debugging
board-level system designs in the networking and data communications
industries. Hands-on knowledge of state-of-the-art board and PCB design tools
and methods; knowledge of logic design principles, state machines, and
software/hardware partitioning tradeoffs a plus; team player, strong written
and oral communications skills; highly motivated to work in a fast-paced
start-up environment.

Applications Engineer

Job Description
Manage and provide technical support to XaQti customers. Formulate application
development and support plans. Assist the Sales Division in securing design
wins. Provide pre-sales and post-sales technical support. Train field
application engineers as and when required. Work very closely with Design
Engineering and Software teams to produce data sheets for new products. Assist
in the definition of features and functions for new product development.
Travel to customer locations as necessary.

Requirements
BSEE/MSEE or equivalent and four years experience in system design and/or
providing technical support for semiconductors to OEM customers. General
knowledge of networking hardware, software, and protocols; knowledge of
components and semiconductors used to build networking hardware/software;
Ethernet knowledge highly desired; good oral and written communication skills;
willingness to travel occasionally.

If you are interested in participating in a dynamic and rapidly growing
company and you think you are qualified for any of these positions, please
send your resume to: ravi@xaqti.com
I've got this brief snippet of code here:

-- Generate the other strobes
process(Reset,Clk)
begin
  if (Clk'Event and Clk='1') then
    Strobes(1)<=Strobes(0);
    Strobes(2)<=Strobes(1);
    Strobes(3)<=Strobes(2);
    Strobes(6)<=Strobes(5);
    Strobes(7)<=Strobes(6);
  end if;
end process;

...that Synopsys' FPGA Express complains:

Warning: The net '/StrobeGen/Strobes<1>' has more than one driver. (FE-CHECK-5)

It does this for all 5 signals that I'm using above. I can guarantee that,
other than the code above, no process assigns a value to Strobes 1, 2, 3, 6,
or 7. So just what is it unhappy about? Does anyone know?

Thanks...
---Joel Kolstad
Joel.Kolstad@USA.Net
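[Editorial note] Not an answer to Joel's specific design, but for readers hitting FE-CHECK-5: the classic way for a registered signal to pick up a second driver is to write its reset in a separate process. The fragment below is hypothetical (Joel's reset handling isn't shown, and an entity instantiated twice, or a bus also driven through a component port, gives the same message); it only illustrates folding the reset into the one clocked process so each bit keeps a single driver.

-- Hypothetical illustration only -- the multiple-driver pattern:
--
--   process(Reset) begin                      -- driver #1 on every Strobes bit
--     if Reset='1' then Strobes <= (others => '0'); end if;
--   end process;
--
--   process(Clk) begin                        -- driver #2 on bits 1-3, 6-7
--     if (Clk'Event and Clk='1') then Strobes(1) <= Strobes(0); ... end if;
--   end process;
--
-- and one way out: a single process driving those bits, reset included.
process(Reset,Clk)
begin
  if Reset='1' then
    Strobes(3 downto 1) <= (others => '0');
    Strobes(7 downto 6) <= (others => '0');
  elsif (Clk'Event and Clk='1') then
    Strobes(1) <= Strobes(0);
    Strobes(2) <= Strobes(1);
    Strobes(3) <= Strobes(2);
    Strobes(6) <= Strobes(5);
    Strobes(7) <= Strobes(6);
  end if;
end process;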
Hi, everybody!

I am now using Synopsys (VHDL entry) to design for the Xilinx XC6200, but I
have some problems compiling the 30 macro (library). Do you know how to
compile the XC6200 Synopsys example "multadd.vhd"?

I have already analyzed "add_uc.vhd" & "mult_uc.vhd" into the work directory,
then analyzed multadd.vhd & elaborated multadd. After that, I compile multadd,
but there are still many errors and I can't save as DB or EDIF format. Do I
need to elaborate and compile mult_uc(behave), mult_uc(struct),
mult_uc(configuration) or add_uc(behave), add_uc(struct), add_uc(configuration)
before I compile multadd.vhd?

If I use the command window (design_analyzer >) and type "elaborate multadd",
is it the same as using FILE -> ELABORATE -> click the filename? There are
many struct, behave & configuration files inside; do I need to compile each of
them?

I could successfully compile some simple VHDL programs with the macros a few
weeks ago, but they can't compile now, so I guess there should be some
procedure to compile the top design with the macros. I have tried many
different combinations; sometimes it compiles and sometimes it doesn't. I want
to know the correct compile procedure.

Xilinx does not have any technical support for the XC6200 software; do you
know who I should contact if I have an XC6200 software problem?

Thank you very much in advance!!

Regards,
Vera
Hi,

some third party is currently making a 25kGates VHDL design for us. We plan to
make a few prototypes (20-50) with FPGAs and make an ASIC later on. As I have
some experience with Xilinx 3k and 4k devices, I asked them to map it to a
Xilinx XC4062XL, which should have sufficient size.

For testing, they mapped a small design (1kG) and found that obviously no
optimization is done. It seems that each and every gate is mapped to a single
CLB. So we get a very high propagation delay (7-10 CLBs instead of just one)
and of course very poor CLB usage. Are we doing anything wrong? B.t.w., the
same design maps perfectly to an Altera device. Of course we want to avoid (if
possible) any special language elements for Xilinx, as we will make an ASIC
later on.

Due to these problems I am considering using Altera instead, but I must decide
quickly, as I wanted to transfer my PCB to layout by the end of this week. Any
quick help is highly appreciated.

Thanks in advance!

--
Michael Kraemer
I'm making a project using Xilinx tools and one weird thing has happened: I
designed the blocks in the schematic, simulated them, and checked that
everything was as I expected... and when I loaded the code into the FPGA,
nothing happened as expected!

This is very strange, because it didn't happen just once or twice but several
times: it simulates one thing, and really does another. Sometimes something
entirely different.

I am optimizing the design for area, i.e., making Xilinx fit the code into the
FPGA (XC4006E).

If someone can give me some help, I would appreciate it...

Thanks in advance,
Rui Pinto