PlayStation 3

Can Sony Dominate with Cell?

SCE has begun pushing the Cell microprocessor as its next strategy. If the firm's aim can be realized, the Sony Group could become a semiconductor major.

Can Sony Computer Entertainment Inc (SCE) of Japan pull off its third major success? The first was in 1994, when the company utilized new compact disc read-only memory (CD-ROM) media to shoe-horn itself into a leading position in the home game system market, succeeding in spite of the fact that the market was almost entirely locked up by leaders Nintendo Co, Ltd of Japan and Sega Enterprises Ltd of Japan. The second success was in 2000, when the home game system market was in the doldrums with old technology, and SCE introduced the latest semiconductor technology to attain an unshakable position in the game industry even while being condemned for its "epic game" approach.

But will there be a third success? In 2005, SCE has begun pushing the Cell next-generation microprocessor as its next strategy. The Cell IC is not designed only for use in game systems, but is intended for application in everything from home servers to TVs, mobile phones and workstations. The firm also plans to aggressively push Cell on the merchant market, nurturing technology born from game systems into a platform for diverse networked equipment. If the firm's dream can be realized it will mean that the Sony Group holds a core part of the network era, which could make it into a semiconductor major. This is part of the reason that Ken Kutaragi, executive deputy president and chief operating officer (COO) of Sony Corp of Japan, always seems to mention Intel Corp of the US as a potential competitor in various developments.

Long-Term Strategy

The presentation at the International Solid-State Circuits Conference (ISSCC) 2005, where the Cell was revealed, was standing-room-only as people packed the several hundred seats for a glimpse.

Is Cell really that great? Of the chip details presented at the conference, the audience was especially intrigued by the very high floating-point performance: 256 GFLOPS at 4GHz.

(256 GFLOPS is over 40 times higher than the Emotion Engine mounted in SCE's PlayStation 2, and over 15 times higher than Intel's Pentium 4.)
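
That headline figure checks out with simple arithmetic (a back-of-the-envelope sketch, assuming each of the eight SPEs completes one 4-wide single-precision multiply-add per cycle, counted as 8 FLOPs):

\[ 8\ \mathrm{SPEs} \times 8\ \mathrm{FLOPs/cycle} \times 4\,\mathrm{GHz} = 256\ \mathrm{GFLOPS} \]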

The real quality of Cell is not in the operating frequency or number-crunching prowess of the prototype chip, however, but in the internal architecture. Advances in semiconductor manufacturing technology and the sharp rise in the number of internal execution units have made this structure essential to continue meeting diversifying applications from digital appliances to computers. In addition, engineers are also working on an environment that will make it possible to network multiple Cells together to act as a single computer. The goal is to leverage the chip's flexibility and extensibility to make it a core component for the electronics industry, and keep it that way over the long term. "We wanted to make an architecture that would be valid for at least a decade," said James Kahle, IBM fellow, Broadband Processor Technology, Microelectronics Div, IBM Corp of the US, emphasizing the future-oriented design of the chip. The prototype chip is merely the first step toward that goal, a starting point.

The basic concept of Cell was firmed up in the spring of 2001, when the joint development lab was established by SCE, IBM and Toshiba Corp of Japan in Austin, Texas. SCE and Toshiba engineers flew to the US for the initial meeting with IBM on the Cell concept, meeting a host of top IBM engineers, such as people in charge of developing the POWER4 server microprocessor. The scale of the development team was gradually boosted to several hundred people, mostly engineers from IBM. The fact that IBM, the former leader in the mainframe world, contributed so heavily to the development of an IC for home game systems clearly demonstrates how the key driver in electronics technology has shifted from computers to home electronics (Figs 1 and 2).

Product Development

The disclosed specs for the prototype chip were not maxed-out data created for the conference. The development team has confirmed operation at up to 5.2GHz on the first prototype chip obtained in April 2004, but the ISSCC presentations on Cell merely stated "4GHz or higher". More than likely, the companies are expecting to use about 4GHz in actual equipment for reasons of higher IC yield, lower dissipation and simplified board design. The initial chip exhibited no problems with logical operations, and was able to boot the operating system (OS). Dissipation, however, was a major issue. Masakazu Suzuoki, VP, Microprocessor Development Dept, Semiconductor Business Div at SCE, feels that this has been resolved: "We had a difficult time reducing dissipation at the start, but finally found the solution in the second half of 2004."

Cell chips will be used in home game systems by SCE, high-definition TV (HDTV)-capable digital TVs and home servers by Sony, and HDTV-capable digital TVs by Toshiba by 2006. Hardware and software for these products is now being developed simultaneously at multiple sites in the US and Japan. Entry into the development areas is strictly controlled, so very few engineers have actually seen Cell chips in operation. In these secret labs there are development boards the size of pillows, mounting twin Cell chips with little air-cooled heat sinks small enough to sit in the palm of your hand. Development is under way on 3D graphic draw libraries for gaming, HDTV demodulation software, and more.

Leading the Era

The Cell chip is a multicore design, single-chipping the general-purpose central processing unit (CPU) core to run the OS and handle other tasks, and multiple signal processors called synergistic processing elements (SPE). The prototype chip has the IBM Power-architecture general-purpose CPU core and eight SPEs.

The circuit configuration has been simplified as much as possible so that the CPU core and the SPEs can operate together at 4GHz or higher, because the complex instruction scheduling that has become so common in high-performance microprocessors lately tends to boost both core footprint and dissipation.

The quantity of SPEs per Cell will vary with the performance the equipment requires and the scale of the circuits to be integrated into the chip, but will always be an even number. The CPU core is not dependent on any specific architecture, and ignoring business-related factors could easily be designed to use ARM for mobile phones and MIPS for desktop equipment, for example. In fact, IBM appears to be developing a separate Cell chip using a totally different CPU core.
The Cell design approach based on the simplified CPU core and signal processors is leading the way for design trends in microprocessors as they move towards multicore design. As Justin Rattner, senior fellow, Corporate Technology Group and senior director, Microprocessor Technology Lab at Intel explained, top people in the industry share the same opinion: "In the future, it will be crucial to design microprocessors by single-chipping multiple simple CPU cores."

Flexible Interfaces

The design approach aiming for application in diverse systems is evident in the system interface linking Cell to peripheral ICs, too. The physical layer is the FlexIO high-speed parallel transfer technology developed by Rambus Inc of the US. The interface is 12 bytes wide, with seven bytes used for output and five for input. Depending on the specific peripheral ICs used, the widths can be freely adjusted in 1-byte units, supporting a maximum of two peripheral ICs (Fig 3).

The per-pin peak data rate for FlexIO is a high 6.4 Gbits/s, which is higher than the 2.5 Gbits/s delivered by existing PCI Express serial transfer, or even 5 Gbits/s second-generation PCI Express technology. As a result, the system interface offers a peak data rate of 76.8 Gbytes/s, roughly ten times faster than the Pentium 4.
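
The arithmetic behind that peak figure, assuming one pin per bit (8 pins per byte lane):

\[ 12\ \mathrm{bytes} \times 8\ \mathrm{pins/byte} \times 6.4\,\mathrm{Gbits/s} = 614.4\,\mathrm{Gbits/s} = 76.8\,\mathrm{Gbytes/s} \]

Split by direction, the seven output bytes carry 44.8 Gbytes/s and the five input bytes 32 Gbytes/s.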

The adoption of FlexIO seems to have been due in part to the fact that it can be used with inexpensive clock ICs. This is crucial in keeping costs down in consumer electronics products costing hundreds of dollars. FlexIO incorporates a circuit that dynamically compensates for clock signal jitter caused by supply voltage fluctuation, making it possible to hit a per-pin rate of 6.4 Gbits/s even with relatively high-jitter clock ICs.

Swallowing ASICs

Behind this major shift in design policy are two facts: it is time for another change in architecture, which generally occurs every five years as semiconductor manufacturing technology advances, and application-specific ICs (ASIC) developed for individual products pose an ever-increasing development load.

In the five years since the development of the Emotion Engine, semiconductor geometry has shrunk considerably. Over this period, architecture revamps have made it possible for microprocessors of a given die area to boost processing performance by a factor of ten, enough to make the shift to a whole new platform worthwhile. The difference in performance between the prototype Cell and the first-generation Emotion Engine is 40x, but the two are about the same size: 221mm2 for the former and 226mm2 for the latter. This is on a par with the Pentium 4, manufactured with 180nm technology, at 217mm2.

With number-crunching performance of 256 GFLOPS, it becomes possible to implement almost all of the signal processing demanded by digital consumer electronics in software. Encoding demanded by Moving Picture Coding Experts Group Phase 2 (MPEG-2) for standard-definition TV (SDTV), for example, can be executed for several dozen streams in parallel. This means that all of the various signal processing circuits currently implemented in individual ASICs can be replaced by the Cell. For applications like mobile phones where signal processing performance does not need to be very high, the quantity of SPEs can be reduced in a special Cell, cutting chip footprint and dissipation.

Full Use of Silicon

One advantage of Cell is that the quantity of SPEs can be varied to scale number-crunching capability, which will prove very handy in meeting the ever-increasing performance demands of digital consumer electronics.

Take H.264 encoding, for example. The prototype chip can handle encoding of multiple SDTV video streams in parallel, but only one HDTV stream. If HDTV imagery is being recorded to Blu-ray Disc media with H.264, for example, the system would require even higher performance in order to be able to simultaneously play a game or execute other applications. Other demands are also being raised calling for boosted performance in digital consumer electronics, such as an image recognition function to make it possible to search for a particular scene within massive imagery records.

With Cell it is possible to develop a microprocessor satisfying the requirements much faster than an ASIC, just by increasing the quantity of SPEs. A large number of signal processing operations in digital consumer electronics are executed in pixel units, making it fairly easy to execute them through parallel processing and gain maximum effect from an increase in SPE quantity.
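
As a concrete illustration of that pixel-level parallelism, here is a minimal C sketch (purely conceptual, not Cell SDK code; the strip partitioning and the NUM_SPES constant are assumptions for illustration):

#include <stddef.h>
#include <stdint.h>

#define NUM_SPES 8  /* the prototype Cell carries eight SPEs */

/* Per-pixel kernel: a trivial brightness lift stands in for whatever
 * signal processing the application actually needs. */
static void process_strip(uint8_t *pixels, size_t count)
{
    for (size_t i = 0; i < count; i++) {
        unsigned v = pixels[i] + 16u;
        pixels[i] = (v > 255u) ? 255 : (uint8_t)v;
    }
}

/* Pixels carry no dependencies on one another, so an image splits
 * cleanly into one strip per SPE; doubling the SPE count halves each
 * strip. (Conceptual only: a real Cell program would DMA each strip
 * into an SPE's local store and run the kernel there.) */
static void process_image(uint8_t *image, size_t width, size_t height)
{
    size_t rows_per_spe = height / NUM_SPES;  /* remainder rows omitted */
    for (int spe = 0; spe < NUM_SPES; spe++) {
        uint8_t *strip = image + (size_t)spe * rows_per_spe * width;
        process_strip(strip, rows_per_spe * width);  /* one strip per SPE */
    }
}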

The fact that performance can be boosted without changing chip size, just by increasing the number of SPEs, also contributes to maintaining a high capacity utilization ratio at the fab. If advances in semiconductor manufacturing technology were used only to shrink chips, it would be necessary to produce cheap chips in volume, increasing the time needed to recover the capital investment in the facility (Fig 4).

Hardware, Software

Cell is more than just the IC: it only achieves full performance when it is used in conjunction with the software. It will not be a trivial task to apply all the power offered by the nine processors in the Cell, including the CPU core, to add value to the host equipment. Balancing the load effectively between the cores will require writing code from a solid understanding of Cell architecture, and that means sophisticated software technology. As one engineer involved in Cell development commented, "Engineers who have only been involved in developing software for general-purpose microprocessors are going to have to relearn everything from the ground up. People who have been involved in ASIC development might be better suited to writing code for Cell."
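
What that balancing act looks like in code, as a minimal sketch with POSIX threads standing in for SPE contexts (an illustrative assumption; a real Cell program would use the SPE runtime and DMA transfers instead):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NUM_WORKERS 8   /* one worker per SPE on the prototype chip */
#define NUM_CHUNKS  64  /* work split finer than the core count */

static atomic_int next_chunk;

static void process_chunk(int chunk)
{
    (void)chunk;  /* stand-in for a signal-processing kernel */
}

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        int chunk = atomic_fetch_add(&next_chunk, 1);
        if (chunk >= NUM_CHUNKS)
            return NULL;        /* queue drained */
        process_chunk(chunk);   /* busy cores simply take more chunks */
    }
}

int main(void)
{
    pthread_t threads[NUM_WORKERS];
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_create(&threads[i], NULL, worker, NULL);
    for (int i = 0; i < NUM_WORKERS; i++)
        pthread_join(threads[i], NULL);
    puts("all chunks processed");
    return 0;
}

Splitting the work finer than the core count lets busy cores absorb uneven chunks automatically, which is the kind of tuning the engineer quoted above has in mind.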

Each company is involved in its own software development project, and it appears, for example, that multiple varieties of Linux running on Cell already exist. While the firms cooperated in the development of the microprocessor, they remain rivals when it comes to Cell-driven products in the marketplace.

While software development methodology will have to be revamped for Cell chips, once the constituent technology required for digital consumer electronics development (OS, libraries and such) is available, it should become considerably simpler to actually develop products. More and more functions, such as H.264 and other codec software and graphical user interfaces (GUI), can be shared across multiple pieces of equipment. Sony is already applying this development method in TVs mounting the Emotion Engine. By utilizing software libraries originally developed for the PlayStation 2, it was able to quickly develop the GUI used in the PSX, called the cross-media bar (XMB).

Outside Sales

In parallel with the adoption of Cell chips in their own products, it seems likely that the manufacturers will begin to push sales to other firms involved in consumer electronics and computers. The more products equipped with Cell chips, the easier it will be to achieve a distributed environment via networking, and that was one of the original concepts of the Cell development plan.

The Sony Group plans to provide not only Cell, but also peripheral and graphics ICs equipped with all the needed input/output (I/O) interfaces. The strategy makes one think of an Intel for the digital consumer electronics world. The firm will probably also provide homegrown OS and software. As mentioned above, the development of Cell software will not be trivial, but for the consumer electronics manufacturers, releasing product software to the competition would be the kiss of death because, along with the software, hard-won expertise would also be transferred.

In fact, Cell is provided with a framework to prevent such expertise from escaping. A function is implemented in hardware that can make it impossible for the dedicated SPE memory space to be addressed by the CPU core. This function could be used to prevent third parties from analyzing software libraries or other code in the SPEs.

In addition to sales to the merchant market, it is also possible that the Cell system interface could be disclosed. If third-party developers provide the peripheral ICs for use with Cell, it would rapidly increase the range of possible Cell variations.

To convince as many IC manufacturers as possible to make peripheral ICs for use with Cell, one possible strategy is to release the specs free of charge, as Intel did with its peripheral component interconnect (PCI) bus and accelerated graphics port (AGP) specs. Sony's Kutaragi suggested, however, that it is more likely the information will only be released under a license agreement.

by Rocky Eda and Tomonori Shindo

Source: http://neasia.nikkeibp.com/neasia/001090
 
PS3 GPU revealed?

*Hiroshige Goto's Weekly Overseas News*
WGF2.0-generation GPUs that could run at over 1GHz

- Separation of programmable units and fixed functions advances

In WGF2.0-generation GPUs, as explained last time, shader coherence increases, and there is a strong possibility that a shader array with a standardized architecture will be implemented, unified even at the physical level. One advantage of this architecture is shader load balancing. But that is not all: another important point is the performance gain from clocking the GPU higher.

In theory, WGF2.0-generation GPUs can be accelerated substantially, because the GPU's shader core becomes far easier to clock up, to double speed, than it has been until now. As a result, GPUs running internally above 1GHz may be close to appearing.

Current DirectX 9-generation GPUs are designed as a structure in which the fixed-function parts and the programmable shader parts are intertwined. Fixed functions were already being reduced in the DirectX 9 generation, and the WGF2.0 generation cuts them back further, but some fixed functions still remain, because for certain processing a fixed-function unit is simply more efficient than a programmable one.

A programmable shader is a programmable arithmetic unit like a CPU: the faster it runs, the higher its performance. Moreover, the unit itself does not implement complicated hardwired logic; for the sake of generality it implements only basic arithmetic logic. That makes it relatively easy to clock up, and in this respect it resembles a CPU.

The fixed-function parts, by contrast, implement logic for specific processing as hardwired circuits. Generally speaking, the more complicated the logic, the harder it is to raise the clock frequency. On the other hand, precisely because the logic is hardwired, it can process quickly even without a high clock. "Processing that takes a shader several cycles can be executed in one cycle with a hardware implementation. There is a trade-off between programmability and per-cycle efficiency," says one GPU industry source.

In other words, the fixed-function parts are hard to clock up but deliver performance even at low clocks, while the shaders deliver less per clock than hardwired logic but are easy to clock up.


- Running the shader core's clock domain at double speed

Because a GPU packs together units of such different character, it has so far been held back by its fixed-function parts, making it difficult to raise the clock. GPU operating frequencies have always remained at a fraction of CPU frequencies, with high parallelism making up for the low clock. Even DirectX 9-generation GPUs remain far below the GHz range. That means the shaders, for all the care lavished on them, cannot show their true performance. For example, for a 500MHz GPU to match the theoretical peak performance of Cell, which runs eight SIMD processors at 4GHz, it would have to carry 64 shaders.
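
The arithmetic behind that 64-shader figure (treating one shader and one SPE as equivalent issue units, as the comparison implicitly does):

\[ 8 \times 4\,\mathrm{GHz} = 32 \times 10^{9}\ \mathrm{unit\text{-}cycles/s}, \qquad \frac{32 \times 10^{9}}{0.5\,\mathrm{GHz}} = 64\ \mathrm{shaders} \]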

With the WGF2.0-generation architecture, however, this problem can be partly solved. In WGF2.0-generation GPUs, many GPU vendors are presumed to be gathering the shaders, the programmable arithmetic cores, into a computation core array. Implemented that way, the shaders are easy to separate from the other units.

Once that is done, it becomes easy to put just the shader array in its own clock domain and run it at twice the clock of the other units. For example, even if the GPU as a whole runs at 600MHz, the shader array could run at 1.2GHz. Shader arithmetic performance would rise sharply.

This kind of clock-domain separation is already common in SoC-type devices. In the PSP chip, for example, the CPU core runs at double the speed of the other units.

Of course, this requires reworking the shader design to subdivide the pipeline more finely. And since clocking up raises only shader arithmetic performance, overall performance will not rise if the bottleneck lies elsewhere. Still, it at least becomes much easier to raise theoretical arithmetic performance. Whether a double-speed core will appear in the very first WGF2.0-generation GPUs is unclear, but a GPU whose shaders run above 1GHz will probably appear within the WGF2.0 generation.


[Figure: GPU clocks in the WGF2.0 era]


- GPU design could change over the long term

As shader arithmetic performance rises, the design philosophy of GPUs may change as well. GPUs have so far moved in the direction of ever more processing parallelism, so that performance could grow even at relatively low clocks. The number of pixel shaders multiplied from 4 to 8 to 16, and GPUs themselves grew large as a result.

In the future, however, if shader performance is what matters most and can be raised by double-clocking the core, the pressure to increase the sheer number of shaders fades. For example, with the same 12 shaders as a present-day mid-range GPU, running at double speed delivers the equivalent of 24 shaders' worth of work per base clock. For mid-range GPUs and below, the drive toward more parallelism could therefore slacken. At the high end, where performance demand knows no ceiling, parallelism will most likely keep rising all the same.
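
Spelled out, the equivalence is simply:

\[ 12\ \mathrm{shaders} \times 2\times\ \mathrm{clock} = 24\ \mathrm{shader\text{-}cycles\ per\ base\ clock} \]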

The very approach to GPU design could change too. The design period of today's GPUs is about half that of CPUs: from the start of development, a product reaches the market in 18 to 24 months. That is because GPUs are not custom-tuned at the circuit level for speed, so the design takes less time. NVIDIA's David B. Kirk (Chief Scientist) explained the reason three years ago as follows.

"If we did a custom-tuned design, the GPU would certainly get faster. The problem is that the design takes time. With an Intel CPU, for example, roughly one to two more years are spent on transistor tuning after the architecture designers finish their work. If we did the same with a GPU, we would be putting out a 2GHz TNT2 in 2002."

In other words, GPUs see far more drastic additions of hardware features and changes of architecture than CPUs. Because the design has to be turned around on a short cycle, there has simply been no room to spend time custom-tuning it for speed.

But the technical trend in GPUs is changing in a big way. New features are no longer added as hardware; they are realized as software running on the shaders. The direction is therefore to make the shaders as general-purpose and flexible as possible, and GPU performance is coming to depend on how fast those shaders run. If so, a GPU manufacturer can in principle take its time designing one widely reused shader architecture and custom-tune it for high clocks. The development cycle may shift to something quite different from what it has been until now.

When that happens, design houses accustomed to building high-speed processor cores, as the CPU manufacturers are, may well move into GPU development. In fact, an official of a certain CPU vendor once floated exactly that idea, saying he had proposed to the parent company that introducing the CPU design approach would make it possible to build faster GPUs.

For the next-generation PlayStation, too, there was once a project to develop a GPU based on the Cell processor. The expectation was that building the programmable shaders on an SPE base and implementing the rasterizer, the post-shader processing units and so on as fixed functions would yield an epoch-making GPU. Going forward, CPU and GPU design may increasingly blur together in this way.


- A Geometry Shader implementation premised on unified shader hardware?

WGF2.0-generation GPUs will be restructured around the shaders. This change means that more drastic new functions are likely to be added in forms realized on the shaders. Two things point that way: the Geometry Shader and the Tessellator (surface subdivision unit).

The Geometry Shader is a shader stage newly added in WGF2.0. It is a programmable stage that handles primitives (it is sometimes also called a primitive shader).

The difference from the Vertex Shader, which sits in the same geometry pipe, is that where the Vertex Shader works per vertex, the Geometry Shader processes whole primitives. It breaks the old restriction of one vertex in, one vertex out, making it possible to transform and amplify primitives. The fur seen in CG movies, generated procedurally by offline geometry programs, is one example of what becomes possible on GPU hardware.
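
A minimal C sketch of that changed contract, using made-up Vertex and Triangle types rather than any real shader API (purely illustrative):

#include <stddef.h>

typedef struct { float x, y, z; } Vertex;
typedef struct { Vertex v[3]; }   Triangle;

/* Vertex-shader-like stage: strictly one vertex in, one vertex out. */
static Vertex vertex_stage(Vertex in)
{
    in.z += 1.0f;                /* stand-in for a real transform */
    return in;
}

/* Geometry-shader-like stage: one primitive in, several out. Each
 * input triangle is emitted together with an extruded copy, a crude
 * stand-in for procedural amplification such as fur fins. */
static size_t geometry_stage(Triangle in, Triangle *out, size_t max_out)
{
    if (max_out < 2)
        return 0;
    out[0] = in;                 /* pass the original through */
    out[1] = in;                 /* emit an extra primitive ... */
    for (int i = 0; i < 3; i++)
        out[1].v[i].z += 0.1f;   /* ... offset along z */
    return 2;                    /* output count need not equal input */
}

The vertex stage maps one to one; the geometry stage may emit zero, one, or many primitives per input, which is exactly what makes it expensive to provision as separate hardware.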

The problem with the Geometry Shader is that implementing this stage as hardware physically separate from the Vertex/Pixel Shaders drives up implementation cost. If dedicated Geometry Shader hardware has to be provisioned for the assumed peak load, the amount of shader hardware multiplies. And as more programmable stages are added to the pipeline, it becomes harder and harder to balance them so that no single stage's shaders become the bottleneck.

Even now, depending on the application, there are cases where the vertex shaders sit nearly empty or the pixel shaders idle. If the Geometry Shader joins them, balancing becomes harder still and the waste inside the GPU grows further.

The story changes, however, if the shaders are implemented as a unified shader array, integrated even at the physical level. Because logical shaders are then allocated dynamically, load balancing happens automatically, and adding a new shader stage becomes easy. GPU vendors can implement the Geometry Shader without agonizing over balance. For that reason, the Geometry Shader's addition to WGF2.0 is presumably premised on the unification and generalization of the shader hardware.


- Tessellator: dedicated hardware, or realized on the shaders?

The Tessellator, which performs surface subdivision, can also be realized programmably on the shaders. In fact, one informed source relates that "the tessellation proposed back when it was still called DirectX 10 was something that used the shaders." The WGF pipeline disclosed last year also contained a Tessellator stage, and the plan may have been to implement it physically on the shaders.

However, the WGF Tessellator, still present last autumn, had disappeared from Microsoft's presentation at this spring's GDC (Game Developers Conference). If the original WGF Tessellator was indeed something that used the shaders, its removal is not a change to the hardware itself. Dropping the Tessellator stage would not mean cutting dedicated hardware; it would simply mean cancelling the plan to allocate tessellation tasks to the shaders.

The detailed circumstances of the Tessellator's deletion from WGF2.0 are unknown, but in another GDC session Microsoft suggested that whether to implement tessellation as fixed-function hardware or as programmable hardware is still being argued. The advantage of fixed-function hardware is high performance relative to implementation cost, but performance is then capped by the throughput of the fixed-function Tessellator. With a programmable implementation using the shaders, by contrast, tessellation performance can be scaled freely, but it is less efficient than a fixed-function unit. Microsoft also pointed out that a programmable implementation has not been standardized.

This argument also ties back to shader performance. If shader performance inside the GPU can keep rising through measures like double-clocking, it becomes easier to implement the Tessellator on the shaders too. In that case a shader-based Tessellator could be realized with modest modifications, without changing the basic design of WGF2.0-generation GPUs. But if the argument is that this cannot deliver sufficient performance, it becomes a case of wanting fixed-function hardware after all. Presumably such objections were raised, and the Tessellator stage dropped out of WGF2.0 as a result.

In any case, the Tessellator episode shows that whether to implement each new feature as a fixed-function unit or realize it on programmable units is still contested inside the GPU world. It is perhaps the first of many such dilemmas for GPUs aiming to become graphics processors capable of general-purpose processing as well. Vendors would like to keep making GPUs more programmable, but they do not want to give up the GPU's high efficiency either. Yet whenever complicated processing can be done more efficiently by a fixed-function unit, transistors must be devoted to resources with no general-purpose use, and the overall generality of the GPU falls. The GPU vendors' troubles look set to continue.


Source:http://www.psinext.com/forums/viewtopic.php?t=6448
 
interesting, it surely has something to do with the ps3, how much .. we'll probably find out at e3 :)
 
I have a feeling the PS3 will wipe the floor with everything. We'll know more after E3.
 
1GHz GPU?
That smells like at least a generation of technical lead over M$.
 
That the PS3 will have more power than the X2 was clear, but I didn't think it would be this clear-cut. It doesn't have to be true, though.

waiting for E3
 
TheTruth:

Thanks for this super-interesting information. It looks more and more like the PS3 will be a killer machine that can put the competition firmly in its place.

That won't be easy for M$, and that makes me very happy! Let's hope it turns out that way!
 
well, let's wait for e3, then we'll know what kind of graphics all three consoles can conjure up, it's that simple.

who has it easy or hard here probably won't depend on performance alone; I consider the release date, the price, the advertising and the launch games to be more decisive factors for the mass market.
 
mia.max wrote:
well, let's wait for e3, then we'll know what kind of graphics all three consoles can conjure up, it's that simple.

No, it won't be that simple! By E3 we'll know quite a bit, but not everything, since all the machines will still be at an early stage. And since SONY takes a rather reserved approach to tech demos while M$ does rather the opposite, first impressions could well look much the same at first.

I'd say that despite the PS3's increasingly apparent technical superiority, real differences will only become visible in the second generation of games, to say nothing of the 4th or 5th generation.
 
Terminator wrote:
No, it won't be that simple! By E3 we'll know quite a bit, but not everything, since all the machines will still be at an early stage. And since SONY takes a rather reserved approach to tech demos while M$ does rather the opposite, first impressions could well look much the same at first.

I'd say that despite the PS3's increasingly apparent technical superiority, real differences will only become visible in the second generation of games, to say nothing of the 4th or 5th generation.
well, most games only unfold their full splendor in the 2nd and 3rd generation; that's probably true on every system.

to be honest, I can't imagine sony settling for half-baked tech demos.
if sony has the chance to show something on the ps3 that looks far stronger than what the xbox2 has, then they will do just that; anything else wouldn't exactly be smart.

exactly this advantage should be, and hopefully will be, used if the ps3 arrives later... possibly even much later.

anything else would be a gift to ms.
 
Sony will show more than just tech demos. Sony has enough respect for Microsoft not to want to stand in the competition's shadow this E3. And if they generate enough hype around their own console in the process, they will have done their job...
 
Sony isn't saying anything. But from May 16 they will make sure that one console above all commands the video game world's attention..., and that is PlayStation 3.
 
Jack wrote:
Especially since Sony's policy of silence should rather be read as the calm before the storm...
Well, I wouldn't exactly call Sony's strategy regarding the PS3 a policy of silence. That strategy is currently reserved for Nintendo. Sony, by contrast, is delivering information in bite-sized pieces, with the CELL front and center. At least we know that it's damn fast. ;)
On the GPU:
NVidia will definitely manage a decent GPU. But I don't think the GPUs of the three consoles will differ greatly from one another. After all, ATi and NVidia are pretty much neck and neck in the PC space. I'm still curious about the exact specs, though.
 
It always also depends on the launch date. Does nVidia have to build a GPU that goes into mass production soon, à la ATI and the Xbox360, or do they have more time? Half a year can make quite a difference technically.
 
frames60 wrote:
It always also depends on the launch date. Does nVidia have to build a GPU that goes into mass production soon, à la ATI and the Xbox360, or do they have more time? Half a year can make quite a difference technically.
though half a year would of course be quite a lot... that would already start pointing toward a launch a full year after the xbox360.
and imo sony can't afford that...

giving ms a one-year head start would be fatal.
 
frames60 wrote:
It always also depends on the launch date. Does nVidia have to build a GPU that goes into mass production soon, à la ATI and the Xbox360, or do they have more time? Half a year can make quite a difference technically.
I don't think half a year will make THAT much difference. But you could be right. We'll see.
 
I think that's about right. I doubt Sony will launch this year. I mean, they have their hands full with production of the new PS2 and the PSP, and at the moment I don't see any big new capacity for mass production of the PlayStation 3 either. Or else the delay of the PSP in Europe to September, with a huge number of launch units, is precisely a sign that they are partly freeing up / needing capacity for the PS3. After all, Sony manufactures almost everything itself (even the GPU does not come out of nVidia fabs, it is made by Sony).
Personally I think that if Sony plays it smart, they can easily "bridge" a half-year gap with the Xbox360 using the PS2 and get the world fired up for the PlayStation 3. And if MS then fails to step clearly out of that shadow, Sony could beat the new Xbox before the PS3 is even on the market.
It all depends on what they show at E³. If the PS3 clearly dominates there and offers better graphics, an earlier launch wouldn't help MS at all, IMHO.
 