Search results for 'Business' - Page: 12
Stuff.co.nz - 5 Jun (Stuff.co.nz) David Swanson says he wouldn’t have had the “confidence” or “connections” to start his own business without the immersive 54-hour event. Newslink ©2024 to Stuff.co.nz
Stuff.co.nz - 5 Jun (Stuff.co.nz) Peter Van De Wiel, 79, died in hospital, homeless and broke, three months after being evicted from the Parnell City Lodge, which he owned and managed.
PC World - 4 Jun (PC World) Intel launches Lunar Lake, the next entrant in its Core Ultra series of laptop processors, today at Computex, ushering in a new generation of AI-infused Copilot+ PCs that have initially been overshadowed by Qualcomm.
Stop us if you’ve heard this before: Intel is prioritizing low power, perhaps feeling the pressure from Qualcomm’s just-launched Snapdragon X Elite. Several tweaks to Lunar Lake’s design, however, have resulted in both power savings and performance boosts, including shifting all of the E-cores to a low-power architecture. The Xe2 GPU architecture at the heart of Intel’s upcoming “Battlemage” discrete GPU is here, too. Oh, and hyperthreading? Gone.
But there’s a fairly major change that affects you, a potential laptop buyer: Intel is embedding the DRAM onto the chip package. Yes, the PC’s memory. For now, if you buy a Lunar Lake laptop, you’ll have a choice between 16GB and 32GB of DRAM, but with no option to upgrade it later.
We’re diving deep into Lunar Lake in this story, so feel free to jump ahead to the section you’re interested in. We’d expect Intel to eventually market Lunar Lake as the Intel Core Ultra Series 2, the unofficial 15th-gen Core chip.
Intel’s Lunar Lake chip.
Intel
Lunar Lake: Made in Taiwan?
First, let’s be clear: Though Intel announced Lunar Lake at Computex, this isn’t a product yet. Intel is working with early production steppings, but Lunar Lake (and presumably laptops) won’t ship until sometime in the third quarter.
IFA, the Berlin trade show that begins Sept. 6, is the projected launch venue, sources at notebook vendors say. Arrow Lake, the next iteration of Intel’s desktop processors (and possibly mobile chips for gaming laptops), is also due this year and could launch around IFA, too.
Intel
While Intel’s Meteor Lake was a relatively complex chip with multiple tiles, Lunar Lake is a simpler design. While there are four tiles, only two matter: a compute tile (fabricated on TSMC’s 3nm-class N3B process) and the platform controller tile (on TSMC N6, an older 6nm-class process). There is also a “filler” tile, a structural “blank” piece of silicon that’s just there to fill out the remainder of the chip and keep it from bending. It’s all mounted over a passive interposer, the “base” tile, which provides interconnections between the chips.
That’s a significant change: Intel had always targeted Lunar Lake as the first of the “angstrom” generation, fabricated on its 18A process. Meteor Lake was the first time that Intel mixed and matched tiles from its own fabs as well as TSMC. The key there, though, was that the compute tile was manufactured on Intel’s Intel 4 process, as it originally promised. With Lunar Lake, only the base tile is manufactured at Intel, according to executives, though Intel handles the assembly.
“You’ve probably heard my boss Pat [Gelsinger, Intel’s CEO] talk a little bit about 18A and we’re on track to fully utilize this process,” said Michelle Johnston Holthaus, executive vice president and general manager of the Client Computing Group at Intel. “We’re going to market on B0 silicon and we’re on track to be in production in [the third quarter] of this year.”
Following Apple: On-package memory
When you buy a laptop, a PC maker will install memory: sometimes soldered on, sometimes with slots that allow more memory to be added in the future. Now, Lunar Lake puts that memory within the chip package itself.
Apple has most recently been known for its on-package memory in the M3-based Macs (with up to 128GB of unified memory) and the M4-based iPad, which follows suit. Now Intel is joining the crowd. Lunar Lake will mount 16GB or 32GB of LPDDR5X memory (running at up to 8.5 gigatransfers per second, in two ranks), saving up to 250 sq. mm of motherboard space.
“I said, how do we build the best thin-and-light PC, and memory on package with our customers was by far the desired first step,” said Jim Johnson, senior vice president of the Client Computing Group and general manager of the Client Business Group at Intel, in an interview.
Intel
“The technical part is that we want to have an exquisite notebook that will take on ecosystem competitors,” Johnson added. “And that’s what we built. And we think 16[GB] and 32[GB] is the right matchup and yes, it’s not upgradable beyond that, but this is the cornerstone of our architecture moving forward and we will offer those options in the future.”
If you don’t like the idea of not being able to upgrade your memory, or if you want more memory configurations, it sounds like they might be coming. “I would just say that the next turn of the roadmaps are going to offer more traditional options,” Johnson said, which other Intel executives said referred to Lunar Lake’s successor, Panther Lake.
Low-power DDR DRAM needs to be soldered as close to the CPU as possible, so Intel’s decision would make sense, if it weren’t for the recent introduction of LPCAMM2, an upgradable module that actually allows you to replace the memory, too.
Lunar Lake’s E-cores are all low power now
Intel’s Lunar Lake makes two major changes to the CPU designs that you’re familiar with. First, Lunar Lake no longer has the separate low-power E-cores that its predecessor, Meteor Lake, shipped with — all of the new “Skymont” E-cores are essentially low-power E-cores, period.
But there’s a bigger twist: hyperthreading has been completely disabled across the board. All cores simply have a single thread associated with them for performance reasons. Even the performance cores, known as “Lion Cove,” are single-threaded. More on that later.
Intel’s Skymont E-cores offer substantive performance and power gains over Meteor Lake, Intel says.
Intel
Lunar Lake has four E-cores and four P-cores. Stephen Robinson, an Intel fellow and the lead architect for the new Skymont E-core, explained that at least for this generation, the E-cores should be thought of as a “brick,” which implies that Lunar Lake products will have blocks of four E-cores each — so a Lunar Lake chip with six E-cores sounds highly unlikely.
Lunar Lake’s E-core has a number of substantial architectural enhancements — wider machine decoding and out-of-order engines, a 4MB level-2 cache shared among all four cores — but the improved performance is startling.
Lunar Lake’s E-cores make the now-familiar tradeoff: they can either be run at lower power or at substantially higher performance for the same power. Here, the low-power cores can either be run at one-third the power of Meteor Lake’s E-cores, or else offer a substantial 1.7X performance improvement.
Intel is even claiming that its E-cores outperform the 13th-gen Core’s performance CPU, Raptor Cove.
Intel
At peak load, Lunar Lake’s E-core performance is basically double that of Meteor Lake, Robinson said. In multithreaded workloads (where Lunar Lake’s four E-cores double the two low-power E-cores in Meteor Lake), performance reaches 2.9X, or 4X at peak clock speeds.
If put in a desktop compute tile, the Skymont E-cores would actually outperform Raptor Cove, the 13th-gen Core performance CPU, by about 2 percent in both fixed-point and floating-point operations, with some variation. Lunar Lake is not a desktop architecture, though. Instead, that’s a tip that may point to how the next-gen Intel desktop chip, Arrow Lake, performs.
Intel is not saying how fast Lunar Lake will be clocked, unfortunately. For now, it’s just talking about the design of the chip itself.
Intel Thread Director gives Windows more control
Intel’s Thread Director has thankfully been simplified within Lunar Lake, too. Thread Director interacts with the Windows operating system, determining where and when to send tasks on to which cores. On Lunar Lake, it’s simple: tasks are assigned to the E-cores first. If they’re full or the workload exceeds their capabilities, then they’re routed to the P-cores.
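That E-core-first routing policy can be sketched as a toy load balancer. The core counts, load units, and capacity threshold below are illustrative assumptions, not Intel’s actual Thread Director heuristics:

```python
E_CORES = 4
P_CORES = 4

def route_task(demand, e_loads, p_loads, e_capacity=1.0):
    """Send a task to the least-loaded E-core; spill to a P-core if the
    E-cores are saturated or the task exceeds an E-core's capacity."""
    i = min(range(E_CORES), key=lambda i: e_loads[i])
    if demand <= e_capacity and e_loads[i] + demand <= e_capacity:
        e_loads[i] += demand
        return ("E", i)
    # Overflow: route to the least-loaded P-core instead.
    i = min(range(P_CORES), key=lambda i: p_loads[i])
    p_loads[i] += demand
    return ("P", i)

e_loads, p_loads = [0.0] * E_CORES, [0.0] * P_CORES
placements = [route_task(0.6, e_loads, p_loads) for _ in range(6)]
print(placements)  # the first four tasks land on E-cores, the rest spill to P-cores
```

With six medium-weight tasks, the first four fill the E-cores and the remaining two spill over to the P-cores, which is the shape of the policy described above.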
As you might expect, there is a wrinkle: the creation of “OS containment zones.” Users have been asking for years for controls to specify playing a game, for example, on all of the chip’s P-cores. It’s not quite clear whether users will be granted this sort of specificity, but the OS will. For example, Microsoft Teams has been granted an OS containment zone so that the app will run only on the E-cores, and won’t touch a P-core, according to a presentation by Rajshree Chabukswar, an Intel fellow.
As a result, Teams power was cut by 35 percent, Chabukswar said.
Lunar Lake’s P-cores kill hyperthreading
The performance core within Lunar Lake, Lion Cove, is 14 percent faster than the P-core within Meteor Lake, known as Redwood Cove. And that’s with a huge change: Intel has turned off hyperthreading across Lunar Lake. Yes, hyperthreading, the SMT technology that’s been a staple of Intel’s chips for about twenty years.
Intel is making the case that hyperthreading is just too expensive in terms of power and cost.
Intel
So why get rid of hyperthreading? According to Ori Lempel, the senior principal engineer of Intel’s P-Core, Intel’s goals were to optimize single-threaded performance, with an eye toward maximizing the performance per watt per area on the chip — low performance per watt costs battery life, and low performance per area essentially costs Intel money in manufacturing costs.
Hyperthreading does make sense for performance parts and datacenters, Lempel noted, but it requires physical space for the hyperthreading logic and the associated silicon. In thin-and-light laptops, the target for Lunar Lake, Intel engineers discovered that they achieved 15 percent more performance per watt and 10 percent more performance per area with hyperthreading turned off than with it enabled.
Intel’s Lion Cove, and its relative performance.
Intel
There are two other key changes in the P-core. First, if a Lunar Lake chip needs to add or subtract performance, it will do so more gradually. Intel processors currently increase and decrease their clocks in 100MHz increments; Lunar Lake will step up and down in 16.67MHz intervals. Second, Intel has added a small “AI” controller, which will monitor the system in real time. The idea is that Lunar Lake systems will make small, incremental adjustments to power and speed, maximizing performance and battery life for users.
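The difference in stepping granularity is easy to see with a little arithmetic; the requested frequency below is a made-up example, not an Intel figure:

```python
def snap_to_step(target_mhz, step_mhz):
    """Round a requested clock down to the nearest available frequency step."""
    return int(target_mhz // step_mhz) * step_mhz

target = 2440  # hypothetical requested frequency, in MHz
coarse = snap_to_step(target, 100)      # today's 100MHz granularity
fine = snap_to_step(target, 100 / 6)    # Lunar Lake's 16.67MHz steps
print(coarse, round(fine, 2))           # 2400 2433.33
```

The finer 16.67MHz steps land within a few MHz of the request, where the 100MHz grid either gives up 40MHz of performance or overshoots and burns extra power.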
From a security standpoint, Intel has added a “partner security engine” to the Intel silicon security engine and the Intel graphics security controller. That partner security engine is Pluton, the Microsoft-AMD security engine that has successfully protected the Xbox.
It’s time for Xe2 to debut
Intel has steadily increased the performance of its integrated GPU in successive generations, but Lunar Lake marks a sharp leap: this is the debut of the Xe2 graphics architecture. Tom Petersen, an Intel fellow, confirmed that Xe2 is inside Lunar Lake, and this is the same architecture that will debut later in a discrete GPU for desktops, code-named “Battlemage.”
Intel’s Xe2 architecture: Lunar Lake on the left, Battlemage on the right.
Intel
Again, Intel isn’t talking specifics, including Xe2’s clock speeds, memory, or details of the Lunar Lake implementation. But Intel provided a more general overview of how Lunar Lake’s Xe2 implementation compares to the integrated GPU within Meteor Lake.
Petersen described the Xe2 architecture as “more compatible with games and with a higher utilization.”
Intel isn’t providing actual performance numbers yet, but it is providing some comparisons to the first-gen architecture.
Intel
Intel’s Xe2 core has been redesigned, with eight 512-bit vector engines accompanied by eight 2048-bit Xe Matrix Extension (XMX) engines capable of 2,048 FP16 operations per clock and 4,096 8-bit integer operations per clock — both tools that can be used for traditional graphics as well as AI. There’s an improved ray tracing unit, too.
In Lunar Lake, Intel has set up the GPU to offer eight Xe cores, with 64 vector engines and two geometry pipelines. All told, Intel believes it will offer 1.5X the performance of the previous generation, at the same power.
Here’s how Intel’s Xe2 will be configured within Lunar Lake.
Intel
“I don’t think I’m allowed to tell you the performance at higher power,” Petersen added.
The Lunar Lake display engine will offer three display pipes, with HDMI 2.1 (up to 8K60 HDR 10-bit), DisplayPort 2.1 (three 4K60 displays), and a new eDP 1.5 connection, which will allow for 360Hz 1440p displays for gaming.
Intel also has a technology called “panel replay,” which is an evolution of how the display panel can self-refresh. Adaptive sync displays adjust the panel’s frame rate to match the content coming in, eliminating judder or screen tearing. Panel replay does something similar. The example shown was a movie, where the panel has to self-adjust its timing to account for the 24fps that movies are delivered in, as opposed to the native 60Hz (or higher) of the panel.
What panel replay does is understand that certain frames may need to be repeated. If this happens, though, the display engine can turn off the CPU cores and in some cases the memory when they aren’t needed. The GPU just queues the needed frames in place.
There’s also something new on the video codec front. While Lunar Lake performs encoding and decoding of the AV1 video codec, it has added decoding support for VVC (H.266), a more advanced video codec. AV1 shrinks file sizes by about 40 percent compared to the older HEVC format, and VVC file sizes will be about 90 percent of an AV1 file, Petersen said. However, VVC’s computational complexity is substantially higher.
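Chaining those two file-size ratios shows roughly how the three codecs compare; the 100MB HEVC baseline is just an illustrative number:

```python
hevc_mb = 100.0                 # illustrative HEVC baseline file size
av1_mb = hevc_mb * (1 - 0.40)   # AV1: ~40 percent smaller than HEVC
vvc_mb = av1_mb * 0.90          # VVC: ~90 percent the size of the AV1 file

print(f"HEVC: {hevc_mb:.0f} MB, AV1: {av1_mb:.0f} MB, VVC: {vvc_mb:.0f} MB")
# → HEVC: 100 MB, AV1: 60 MB, VVC: 54 MB
```

So by Petersen’s figures, a VVC file ends up at roughly 54 percent the size of the HEVC original.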
Lunar Lake’s NPU: It’s finally time for Copilot
Naturally, a key focus for Lunar Lake is AI, which features a significantly improved “NPU 4” core.
We live at a weird intersection of AI capabilities, which Lunar Lake lands in. Most people have only used AI in the cloud, through Windows Copilot, Google’s AI Overviews, ChatGPT, or some other service. Chipmakers would love for you to use local AI, and Copilot+ PCs with native AI capabilities will start shipping later this month — but only initially with Qualcomm’s Snapdragon X Elite chips inside.
Intel is making the case that whatever the platform — CPU, NPU, or GPU — it can deliver.
Intel
Customers who bought into Intel’s initial vision of an AI PC may feel a little jilted; current Meteor Lake laptops only generate 11.5 TOPS from the NPU, significantly under the 40 TOPS that Microsoft’s Copilot+ program requires. The new “NPU 4” inside Lunar Lake produces 48 TOPS all by itself. That means Lunar Lake PCs will be Copilot+ capable when they ship. Meteor Lake AI PCs are not.
Further reading: Microsoft’s Copilot+ PC push leaves existing ‘AI PCs’ behind
What’s new? Meteor Lake had a pair of inference pipelines in the NPU. Lunar Lake has six, each of which triples the number of multiply-accumulate (MAC) engines that are fundamental to AI processing. That basically works out to double the performance in the same power envelope. AI processing is essentially a ton of specific matrix and vector mathematics, and Intel has begun adding in specialized blocks. One of them, what Intel calls the SHAVE DSP, is a vector engine that provides 12 times the vector performance. Basically, Intel is saying that SHAVE will boost the performance of LLMs, or AI chatbots, running locally on your PC.
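For readers wondering what a multiply-accumulate actually is, here is a minimal sketch: every dot product inside a matrix multiply is just a chain of MACs, and that is the operation NPU hardware parallelizes by the thousands per clock:

```python
def dot(a, b):
    acc = 0.0
    for x, y in zip(a, b):
        acc += x * y  # one MAC: multiply, then accumulate into a running sum
    return acc

# A 2x2 matrix multiply decomposes into four dot products (eight MACs total);
# an NPU's MAC engines run vast numbers of these in parallel.
A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
C = [[dot(A[i], [B[0][j], B[1][j]]) for j in range(2)] for i in range(2)]
print(C)  # [[19.0, 22.0], [43.0, 50.0]]
```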
Intel believes that Lunar Lake offers a potent combination of AI capabilities, with 120 TOPS spread over the CPU (5 TOPS), GPU (67 TOPS), and NPU (48 TOPS). But that unfortunately ignores the broader point: most applications pick one chip, and don’t use all three at once.
Not all applications, though. In a demo, Intel showed how 20 iterations of Stable Diffusion could be run in about a quarter of the time of Meteor Lake, and at lower power, too, using the NPU and GPU in concert.
Intel NPU4 on Lunar Lake in action.
Intel
Lunar Lake’s communications technology: using Wi-Fi as a sensor and more
Surprisingly, Lunar Lake will not be the debut platform of Thunderbolt 5, as you might have expected. But it will integrate Wi-Fi 7 and Bluetooth 5.4, and provide an enhanced multi-link single-radio (eMLSR) technology that should improve throughput by hopping back and forth between wireless channels. And there’s a wild new technology, called Wi-Fi Sensing, that uses a Wi-Fi radio as essentially a type of radar.
According to Carlos Cordeiro, an Intel fellow and the wireless CTO of Intel’s Client Computing Group, Intel is strongly encouraging laptop makers to cluster all of the Thunderbolt ports on one side of a laptop, stop mixing and matching Thunderbolt and USB-C ports, and properly label all Thunderbolt ports — all things that should have happened long ago. (Lunar Lake will also support three Thunderbolt ports, up from two, and the Thunderbolt Share sneakernet will be featured.) Cordeiro indicated that Thunderbolt 5 will be in Intel silicon later this year, which likely means Arrow Lake.
Interestingly, you will see higher throughput from Thunderbolt 5 devices even without a Thunderbolt 5 port: Thunderbolt 5 SSDs will actually deliver 25 percent more performance on a Lunar Lake PC’s Thunderbolt 4 port, Cordeiro said.
Wi-Fi 7 was in Meteor Lake, too, but now it’s been more fully integrated, saving power. Intel built in a small 11Gbps interface between the Lunar Lake platform controller tile and the wireless module, future-proofing the connection.
Though the Intel Wi-Fi radio can talk on three bands — 2.4GHz, 5GHz, and 6GHz — those bands can still become congested, slowing data throughput. Intel built its enhanced multi-link single-radio (eMLSR) technology to solve that problem. Essentially, eMLSR concentrates on a single frequency but periodically listens to the others, especially if the current frequency becomes congested. The technology will then shift the radio’s communication over to an uncongested frequency.
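A toy sketch of that band-hopping behavior, with made-up congestion scores; this illustrates the concept only, not Intel’s actual algorithm:

```python
def pick_band(current, congestion, hop_margin=0.2):
    """Stay on the current band unless another is clearly less congested."""
    best = min(congestion, key=congestion.get)
    if congestion[current] - congestion[best] > hop_margin:
        return best  # hop to the quieter band
    return current   # not worth the hop; stay put

congestion = {"2.4GHz": 0.9, "5GHz": 0.5, "6GHz": 0.2}  # invented scores
print(pick_band("5GHz", congestion))  # hops to the quieter 6GHz band
print(pick_band("6GHz", congestion))  # already on the best band, so it stays
```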
And did you know that DDR memory itself can cause Wi-Fi interference? Intel uses a technology called RF Interference Mitigation to dynamically adjust the clock frequency of the memory to prevent interference.
Intel can adjust the frequency of its DDR memory to avoid interference with your laptop’s WiFi radio.
Intel
Wi-Fi Sensing uses both antennas, one broadcasting and one receiving. The laptop essentially broadcasts radio data out, then uses the other antenna to “listen” for a bounce off various objects — specifically you. If the Wi-Fi Sensing technology detects you’re walking away, it locks your computer and shuts off the display. If you then approach, it wakes the display (but doesn’t unlock the computer).
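That lock/wake behavior amounts to a tiny state machine; the event names and state fields below are assumptions for illustration, not Intel’s API:

```python
def on_presence_event(state, event):
    """Apply a presence-detection event to the PC's lock/display state."""
    state = dict(state)
    if event == "user_departed":
        state["locked"] = True       # walking away locks the machine...
        state["display_on"] = False  # ...and shuts off the display
    elif event == "user_approaching":
        state["display_on"] = True   # approaching wakes the display,
                                     # but deliberately does NOT unlock
    return state

s = {"locked": False, "display_on": True}
s = on_presence_event(s, "user_departed")
s = on_presence_event(s, "user_approaching")
print(s)  # {'locked': True, 'display_on': True}
```

Note the asymmetry: departure both locks and blanks, but approach only wakes the screen, leaving authentication to the user.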
“You can be a kid, a big person — that’s the other type of magic,” Cordeiro said. “We can retrain the model so that we know the size of the person that’s approaching.”
It’s a little scary! Intel has bigger plans for Wi-Fi Sensing, though it’s unclear whether they’ll come to market. “Future PCs will be able to detect user movements and gestures, monitor heartbeat and breathing rate, whether accessories are to the left or right, how many there are, etc.,” Intel said.
Intel’s Unison is getting beefed up, too, with tablet control, a quick connect to phones that don’t have access to Unison, and a universal hotspot. The latter functionality is already in Windows, so it’s unclear what Unison will deliver.
Intel
Finally, Lunar Lake can run Bluetooth over PCIe, which Cordeiro said will save time accessing the Bluetooth device.
In all, Lunar Lake is yet another substantive rewriting of the mobile PC processor. But with Qualcomm’s Snapdragon X Elite and AMD’s Ryzen AI 300 waiting in the wings, can Intel maintain its traditional laptop leadership? We’ll see.
Stuff.co.nz - 4 Jun (Stuff.co.nz) The Finance Minister and Health Minister are at Waikanae Health Centre for an announcement about radiology.
PC World - 4 Jun (PC World) As Qualcomm-powered Windows on Arm PCs begin appearing here at Computex, ushering in a generation of AI-infused Copilot+ laptops, it seemed appropriate to interview a major player in the push.
No, not Qualcomm. (We’ve already spoken to them.) Instead, I mean Arm, the semiconductor design company that licenses CPUs to companies like Qualcomm, Apple, and Samsung. Arm dominates in smartphones and tablets, and now, true PC contention finally seems possible.
I sat down with chief executive Rene Haas in Taipei, touching upon everything from NPUs, to how Arm solved its Windows app gap, to why Intel, AMD, and Qualcomm don’t matter to the success of Windows on Arm PCs. And he has nothing but praise for Apple’s M-series Macs, which he says “woke up the industry on the art of the possible” with Arm laptops. “I think Apple silicon has really proven that you could build a first-class laptop and have no compromises,” Haas said.
This interview has been slightly edited for length and clarity.
Arm chip and AI discussion
Mark Hachman, PCWorld: Since AI is the big thing now, my first question is basically an AI prompt. Please explain what your recent CSS for Client processor means to a PC-centric audience.
Rene Haas, Arm: The way I might describe it is if you think about the chip that goes inside your PC, and we have CSS today for mobile phones — we aren’t announcing CSS for PCs. The way to think about it would be just a chip that’s inside your laptop that’s running all of the application software, the display or the GPU. Even the NPU that all gets designed by different blocks of separate pieces of intellectual property.
So what we ended up doing with a CSS is we take everything that’s around the computer, the CPU, the GPU, the NPU, and all of the mesh network, the interrupt controllers, and we put that all together as a finished block, and deliver that to the person who’s building the system on a chip, and then they are able to get that shipped to market much faster. An analogy for the PCWorld audience: if you think about IP as individual Lego blocks and compute subsystems as the Lego blocks that allow you to build a Statue of Liberty. That’s kind of what we do.
Mark Hachman / IDG
In the past Arm would simply license cores to companies like Qualcomm, Apple, Samsung and others. Is this an expansion of that?
It’s more that we would sell cores, but instead, for a PC where you might have eight big cores and four small cores, we would deliver that configuration. So what is the right mix and match of CPU cores to maximize performance? And then from implementing an actual system on chip we’ll take it all the way, with all the libraries you’d need for, say, three nanometer.
And with that we can literally assure you that you’re going to get, call it four-year performance. Everything is tuned. And the reason we do it is not only to save time to market, but we can almost assure that, built this way and configured this way, you’re going to get the maximum performance and power savings.
You mentioned an NPU before. But you don’t build an NPU, at least not in the CSS architecture. Did I miss something?
There isn’t an NPU today on the PC side. We have NPUs today on what I would call the entry-embedded line. But yeah, we haven’t gone public with our NPUs for the high end.
In your CSS announcement, you mentioned KleidiAI, which provides AI functions but for the CPU. Can you explain that to a PC audience that is just starting to understand what an NPU is?
Right. It’s a great question because the way that the software takes advantage of the NPU [today] is fairly high level. In other words, if the NPU has multiply-accumulates [a type of math] to run off a machine learning algorithm, the software will take advantage of that NPU. It will go off and run these complex instructions there.
What it’s not able to do is take advantage of anything unique in the NPU that might have been done down at the metal layer or the hardware-specific layer, because there’s no way for the application to know what’s inside that underlying hardware. That, by the way, is one of the disadvantages of everyone having their own NPU; the software ends up taking the least-common-denominator approach, just making the most simple assumption about what’s there.
What KleidiAI does on the CPU is…. well, inside the CPU are very, very specific instructions that will accelerate performance. In the case of Arm V9, these are things like what we call SVE or SME, Scalable Vector Extensions, Scalable Matrix Extensions, these are things that can really, really speed up an AI algorithm.
But again, the software developer doesn’t really know whether the processor has SME. He maybe doesn’t know anything about it. So if you call that runtime library, the library is going to know — oh, this is what’s there, and I’m going to take advantage of it. So it allows for significant speedup of the performance of the software, without the developer having to know. That is really probably one of the superpowers of those underlying libraries.
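The runtime-dispatch pattern Haas describes (probe the CPU once, then route calls to the fastest kernel found) can be sketched in a few lines. The feature probe and kernel names here are illustrative stand-ins, not KleidiAI’s actual API:

```python
def probe_cpu_features():
    # A real library would query hardware capability registers or an OS
    # interface; this stub pretends the CPU has SVE but not SME.
    return {"sme": False, "sve": True}

def dot_sve(a, b):     # stand-in for a vector-extension-accelerated kernel
    return sum(x * y for x, y in zip(a, b))

def dot_scalar(a, b):  # portable fallback kernel
    acc = 0.0
    for x, y in zip(a, b):
        acc += x * y
    return acc

_features = probe_cpu_features()
dot = dot_sve if _features["sve"] else dot_scalar  # chosen once, at load time

print(dot([1.0, 2.0], [3.0, 4.0]))  # 11.0, via whichever kernel was selected
```

The application just calls `dot`; the library picked the implementation, which is exactly why the developer doesn’t need to know whether SME or SVE is present.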
You may have just done this, but describe for me again what the advantage would be in the real world.
The way to think about it is an analogy — it’s not a PC analogy, but it’s a fun analogy, and it applies. When [Google] Gemini came out for Android phones, Samsung’s Galaxy had two chips underneath the hood. They had a Qualcomm chip and a Samsung Exynos chip. Both of those had NPUs, but they were a little bit different.
So from an application standpoint, Gemini was only able to run at a general level, and it wasn’t really optimized for the hardware. So fast forward, what’s going to happen with these agents, whether it’s Copilot or Gemini, is they’re going to be part of the operating system. And if they’re part of the operating system, they really want to understand as much of the specifics of the low-level hardware so they can be performant. But as long as these things are a little bit different, you’re not gonna be able to manage that.
So we think the library approach is not only in the right place for CPUs, but I think over time, that’s what happens with these entities.
And you’re going to have an NPU for these high-end processors. You just haven’t come to market yet.
You can extrapolate that.
That’s why when you see people benchmarking 50 TOPS versus 40 TOPS, it’s a benchmark.
And TOPS isn’t supposed to be a great benchmark or definition.
TOPS isn’t a great definition; it’s a crude benchmark of tera-operations per second. But what’s more important is, is the software able to take advantage of the hardware? And that’s the story of our libraries.
Snapdragon PCs and Windows on Arm
Arm’s CSS has a “traditional” mix of extreme cores, performance cores, and efficiency cores. But Qualcomm’s Snapdragon X Elite just ditches all that for all performance cores. That’s it. OK, so what does that mean?
Qualcomm’s doing this mix and match, but what we see is the direction of travel is such that the complexity of software is so high.
Respectfully, we work a lot more closely with the operating system vendors and the application community than we do with folks building the chip. When it comes to the software, the decisions in terms of what Microsoft or Google is going to put in the operating systems are made years in advance, usually before the chip vendors decide which core to put inside it.
So the reason we think the CSS approach is going to be right over time is that it is going to allow us to work very closely with the application ecosystem, the developer ecosystem, the operating system vendors, to really ensure that we are delivering not only the most software-optimized platform, but because the time to develop these chips keeps getting longer, the manufacturing cycle times to build them keeps getting longer, that this idea of I’m gonna selectively pick all the bits and then figure out how to mix and match, no. People run out of time. And that’s why we’ve seen the CSS approach be so, so compelling for folks, it just saves a ton of time.
This is a good discussion. Let’s stick with the software. One of the historical problems with Windows on Arm is the software: it’s run slowly, with compatibility issues. Tell me how you’re helping to solve those problems.
We had a lot of benefit from the mobile ecosystem in really, really driving native applications. So just think about all the apps you run on your mobile phone: Adobe, Spotify, browsers — they have now all been natively ported to Arm, and that’s the monster benefit.
When you go back to the Windows platform, you think about performance, because performance is really a function of both software compatibility and software optimizations. So there are two bins.
One is the apps are there. On Windows on Arm [in previous years] there were holes. The apps were simply not there. In this day and age, that’s table stakes. If you don’t have it, it’s a big, big deal. That has literally all gone away.
And now all the apps are tuned. And to the question, well, how did that happen? It’s a combination of working very closely with Microsoft.
When Microsoft talks about their Prism interpreter for Windows on Arm, how does that intersect with your own efforts?
We work incredibly closely with Microsoft.
So how does this intersect with say, Qualcomm, who has a big stake in making Windows on Arm work. Do you sit down with them? Does one side take direction from the other?
I think when it comes to the compute platform, it’s more Arm with Microsoft than it is Arm with Qualcomm, if that makes sense. Android is a good example.
If I think about Android, we work very closely with Samsung who builds chips, Qualcomm who builds chips, MediaTek who build chips, but we’re closer to Google. And it’s not anything negative about the chip guys. But the compute platform is between Arm and the operating system, between Arm and the application developers. It’s not really the chip guys.
Is that just because a rising tide lifts all boats, and it makes more sense to work for the benefit of all of the chip vendors, and not just one?
If you go back to the first Android smartphone, the number one vendor developing the app processor was [Texas Instruments, with] OMAP. And one of the other very large guys doing chips back then was Broadcom.
Fast forward to today, it's MediaTek and Qualcomm. And if you go back to the handset vendors back in 2009, there was HTC and LG, Ericsson, Nokia. And you look at it and say, well, Nokia is gone. Ericsson is gone. LG is gone. HTC is gone. TI is gone. Broadcom's gone. Yet Arm is still dominant in Android. Why is that? Well, it's because we create the environment. And you create the opportunity for different handset manufacturers and different chip people to enter the market.
I like this conversation and I want to continue it, but a rumor springs to mind: that there has been an exclusivity agreement in place between Microsoft and Qualcomm for Windows on Arm, and that agreement ended this year. Is this true?
Everything I've heard is that that is true. I've heard that rumor, exactly, and I have also heard that it times out this year. And that will allow other players to enter the market, which, again, all the rumors I've heard say is true. And I think it's going to be great, because it's going to allow for choice and it's going to allow for diversity, which is kind of the theme of the Windows ecosystem.
How do you view the potential for Windows on Arm for Arm, versus something like, automotive?
Gosh, without getting into the [details]… for Windows on Arm, it’s a pretty significant revenue opportunity. Because 200 million units, and our market share is approximately zero.
Well, maybe not zero. But smallish.
Yeah, so I think there's only upside for us there. And if I look at the other ecosystem for PCs, and I look at what macOS has done with its silicon, it has been amazing. And the products are amazing.
Do you think Apple helped validate your approach, by making its transition from X86 to Arm?
They were a great help. They were great. Apple's a fantastic partner. And I think Apple silicon has really proven that you could build a first-class laptop and have no compromises.
We’re learning that Qualcomm has promised monthly drivers for Snapdragon X Elite PCs. That’s their commitment. Do you help out here?
Were they referring to GPU drivers?
I believe they were referring generically to monthly driver updates.
If it's an OS driver, that's actually Arm and Windows. So when you get that annoying security patch — "Windows needs to update your machine" — that's Arm and Windows, meaning Microsoft. Qualcomm is not involved in anything relative to the OS, first of all.
From where you’re sitting, is the Windows on Arm community providing the right messaging to consumers and potential buyers?
Again, respectfully, I think the world has kind of moved on relative to "Intel, AMD, Qualcomm inside," and there's probably less of a buying decision there for folks anymore.
I think the AI PC is good liftoff because it’s obvious with what Microsoft’s doing with Copilot and what runs locally and what runs at the cloud. And it’s obvious that AI is creating all kinds of differentiated use cases.
Let me say it this way. If there was no AI, I think it might have been a little harder for Microsoft in general to create buzz around this new category. I think AI PC gives a great kind of tailwind. And I think on top of that, that it creates the window for new machines.
I think the Windows on Arm machines are going to be there when people say, oh, I need it. It starts with the AI PC. A new opportunity. Now let me see, what are the options I have with an AI PC? Oh my gosh, these machines here look pretty good. The battery life is great. The thermals are great and the mechanicals are great. I think it's less about, oh my gosh, what brand is it, Intel? AMD? No.
So let’s say Windows on Arm is a resounding success. What does that mean for future development of Arm processors? Or does CSS for Client anticipate that success?
Yes.
It’ll be good for us.
OK, final question. I know I’m probably going to get a biased answer, but did you expect what I’d characterize as a warm reception for Windows on Arm this time around, versus before?
I did. I did. I think it was time. And again, I would give thanks to the folks in Cupertino [Apple]. I think they woke up the industry on the art of the possible in terms of what can be done with an Arm-based PC.
I think a lot of things come into play relative to the right time. Microsoft making the investment. So maybe the timing is right. I mean, as I mentioned in the keynote, I was personally involved in the very first Windows on Arm PC. I was at Nvidia at the time; I was the GM [general manager] that was running that business.
I lived Surface RT. It had a kludgey version of [Microsoft] Office. It had no enterprise support whatsoever. If you were a CIO or an IT manager, there was no way to do anything with it. All that's gone.
Further reading: Surface VP sitdown: How is AI going to change Microsoft’s PCs?
CPUs and Processors Read...Newslink ©2024 to PC World | |
| | | RadioNZ - 4 Jun (RadioNZ)Analysis: Aside from the traditional bilateral meetings with leaders, the Prime Minister`s trip will have a major focus on business. Read...Newslink ©2024 to RadioNZ | |
| | | BBCWorld - 3 Jun (BBCWorld)Shein has been linked to unethical business practices, including forced labour allegations. Read...Newslink ©2024 to BBCWorld | |
| | | BBCWorld - 2 Jun (BBCWorld)Evan Davis on common features shared by business leaders with a flourishing enterprise to their names. Read...Newslink ©2024 to BBCWorld | |
| | | RadioNZ - 2 Jun (RadioNZ)A controversial new road design in Palmerston North has caused one business to move, but a regular cyclist says the changes could avoid potentially fatal crashes. Read...Newslink ©2024 to RadioNZ | |
| | | Sydney Morning Herald - 1 Jun (Sydney Morning Herald)Rugby Australia officials are adamant it was a pragmatic business decision to kill the Melbourne Rebels; supporters of the Rebels say it was an unfair call and a “premeditated murder”. Read...Newslink ©2024 to Sydney Morning Herald | |