
Forum: Author Hangout

Comparative Processing Power (very OT)

awnlee jawking

I use a refurbished Windows XP machine, one of the last and allegedly most powerful produced by Dell.

I was wondering, to an order of magnitude, how much more processing power does the latest iPhone have?

AJ

John Demille
Updated:

@awnlee jawking

You didn't provide the specs for that Windows XP machine, so this is all guesswork.

The thing about the iPhone's processor versus a general-purpose processor like Intel's is that the iPhone has dedicated units that the Intel chip lacks. So the phone's effective processing power varies with the job at hand.

For example, for straight word processing, I would estimate a CPU core on the Intel machine to be about a tenth to a half as fast as one of the iPhone's CPU cores. A search-and-replace that took 30 seconds to finish on my old Mac takes roughly 5 seconds on my current one.
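For what it's worth, the numbers line up: 30 s / 5 s = 6, so the old Intel core ran at about one sixth the speed of a new core, comfortably inside that "tenth to a half" range.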

I compare my current Mac with my old Mac because the current one has an M2 processor that is comparable on a per-core basis to the iPhone 15's.

For something like video encoding, the difference is ridiculous because the iPhone has a neural engine that makes it ludicrously faster than the CPU in the Windows machine.

I know from my own experience that the Mac I had in 2012 could re-encode an MPEG-2 file into an H.264 MP4 file at a rate of about 10 frames per second.

My current Mac with a neural engine (the same as the iPhone's) encodes video at between 850 and 950 frames per second, roughly two orders of magnitude faster.
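Taking ~900 fps as the midpoint, the speed-up is 900 / 10 = 90, which is close to 10^2 - just shy of two full orders of magnitude.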

So, it depends on the job.

Replies:   awnlee jawking
awnlee jawking

@John Demille

Thank you, that's a very good answer.

I don't have the specs for my machine. I could probably look them up but I've never bothered - it does the job I need. FWIW, it says "Optiplex 780" on the chassis.

AJ

Replies:   Dinsdale
Dinsdale

@awnlee jawking

The setup doc for that particular model is dated March 2010.
The top model had an Intel Core 2 Quad processor, the next one down an Intel Core 2 Duo (I had one of those and it was fast enough for simple tasks).
Memory: 4 DIMM slots, each taking 1 GB to 4 GB.
It only has USB 2.0; the Ethernet controller is 10/100/1000.

I'd expect the graphics to be rather inadequate, if it has a WLAN option then an outdated one, and it should be a good source of heat during the winter months.

Keet

@awnlee jawking

You want to compare a 20-year-old PC system to a current high-end phone? It's hardly reasonable to compare a PC to a mobile phone. At the same age a PC will outperform any phone, but with a 20-year gap in development there's nothing left to compare, since you can't run the same performance tests (or any other software) on both.
I think any current high-end mobile phone would outperform a 20-year-old PC if there were a way to measure it. Current mobile phones are virtually PCs in a small form factor. The major things holding them back from even better performance are heat handling and battery power.

Replies:   awnlee jawking  Mushroom
awnlee jawking

@Keet

I'm not expecting a truly objective measure, more a broad-brush estimate, cf. 'the Sinclair Spectrum had more computing power than the Apollo 11 moon-landing computer'.

AJ

Replies:   Keet
Keet

@awnlee jawking

I'm not expecting a truly objective measure, more a broad-brush estimate, cf. 'the Sinclair Spectrum had more computing power than the Apollo 11 moon-landing computer'.

An objective measure is obviously not possible; even a subjective one requires knowing the specs of the system. Like most systems, the actual hardware configuration varied a lot: 1 GB versus 4 GB of memory makes a big difference, and so does the specific processor. The lowest possible configuration won't come close to what the average current mobile phone can do; the highest possible configuration could come a lot nearer, depending on what software you test it with.

Mushroom

@Keet

I think any current high-end mobile phone would outperform a 20-year-old PC if there were a way to measure it.

I would say even then there is no comparison. I was doing video editing and rendering on contemporary computers over 2 decades ago with Adobe Premiere.

Now granted, that was the single-core 32-bit era, and it could be dog slow, but it worked. I want to say it was around 10 minutes of render time per minute of NTSC video output. And those with money were running dual-CPU systems.

But even a modern iPhone has some serious issues doing anything more than very basic editing with Adobe Premiere Rush, and that is a heavily cut-down version designed specifically for that platform. Most people I have talked to say it is almost worthless, with horrible render times, in the range of 15 minutes per minute of video. And that is if it does not crash, since the device struggles to process that much data on top of doing all the other "phone things".

An actual computer does not have that issue. As I write this, I am rendering a 3.5 hour long video, playing Civilization V, watching a clip from the next project I am going to edit, and typing this all at the same time.

Good luck getting any phone to handle all of that. And my computer is not all that new; it's a 4-core model that is about 6 years old.

To find a computer that a phone could beat at most processing tasks, I think one would have to go all the way back to the first-generation 486 or even the 386. Those low-power ARM processors really are crap when it comes to actually hauling around and digesting large amounts of data.

Replies:   helmut_meukel  Keet
helmut_meukel

@Mushroom

all the way back to the first-generation 486 or even the 386.

Back then I had a discussion with the service guy at my preferred computer shop. He wanted me to "upgrade" my motherboard (AMD 386-40, no FP co-processor) to a 486 at 25 MHz.
I declined because, with my usual workload, the integrated FP unit wouldn't be used much, and the 40 MHz 386 visibly outperformed the 25 MHz 486. I had my system there to upgrade some other card, and we ran some tests with actual programs, including one computing all prime numbers below a set value; we chose 10,000.
My system was always faster than his 486-25, most noticeably – to his astonishment – with the prime number program! I told him that for computing prime numbers you use long integers (as I did in this program), not floating-point numbers, so the 486 class has no advantage there.
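For illustration, here's a minimal sketch of that kind of integer-only test (assuming plain trial division; the actual program may have worked differently):

    #include <stdio.h>

    /* Count primes below 10,000 using integer arithmetic only. */
    int main(void)
    {
        long limit = 10000, count = 0;
        for (long n = 2; n < limit; n++) {
            int is_prime = 1;
            for (long d = 2; d * d <= n; d++) {  /* d*d bound avoids sqrt() */
                if (n % d == 0) { is_prime = 0; break; }
            }
            if (is_prime) count++;
        }
        printf("%ld primes below %ld\n", count, limit);  /* prints 1229 */
        return 0;
    }

Note that even the loop bound (d * d <= n) stays in integer arithmetic, so the FPU never comes into play - exactly why the faster-clocked 386 won.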

HM.

Replies:   Mushroom
Mushroom

@helmut_meukel

My system was always faster than his 486-25, most noticeably – to his astonishment – with the prime number program! I told him that for computing prime numbers you use long integers (as I did in this program), not floating-point numbers, so the 486 class has no advantage there.

Unless it was an operation that took advantage of the built-in 8K of cache.

But in reality, the difference between the 386 and the 486 was negligible. The 486 was just a 386 with the 387 and an 8K cache built in. And the first ones, as you said, would have done no better, or even worse, against the supercharged AMD chips.

I actually went a bit crazy then and got a 486-50. Not the DX2, the original that actually ran at 50 MHz. The thing was bloody fast, but it could also be unstable, so hardware integration was always a bit of a bitch with it. And it amazed my prick brother-in-law a few years later when he bragged that he had just got a new DX2-50 and that it was one of the fastest systems out there.

I laughed, challenged him to a stone test for a six-pack, and let them churn away. When it became immediately obvious I would win, I explained to him about clock doubling and wait states, and that his was really an internally doubled system running on a 25 MHz bus.

But yes, I have absolutely no doubt an AM386-40 would beat an i486-25. They are almost the same CPU minus the 8K cache and the co-processor, but the AMD would be working at almost double the bus speed.

I have been a big AMD supporter for decades, actually, because they long tried to deliver actual performance improvements instead of just a new coat of paint on an old CPU, like Intel was doing in that era. Not unlike what NEC did with their V20/V30 chips. Back in the day I saw 8086-10 chips getting smacked by NEC V30-8 chips. The 8086 had a faster clock, but the V30 was a much more efficient CPU.

Like I said, there is a hell of a lot at play other than raw clock and core counts: bus speed, bus width, wait states, clock doubling and other tricks internal to the chip rather than the whole system, wattage, and so on. That is why two chips may appear the same on paper, or one better, yet when put into real-world situations one is the clear winner over the other, even if on paper the other seems like it should be faster.

Replies:   Dinsdale
Dinsdale

@Mushroom

The 486 did have one advantage over the 386: a faster bus. Of course, the VESA bus died a quick death when PCI was introduced.
VESA was no advantage for CPU-only processes, but those were not exactly the normal case.

Replies:   helmut_meukel
helmut_meukel

@Dinsdale

Most boards had only one VESA slot; a few had two. The 486 couldn't reliably handle more than two VESA slots. On the one-slot boards, that slot was used by the graphics card; very few other cards were available for VESA bus slots.
The EISA bus, by contrast, could be used with both 386 and 486 processors.

I fondly remember my large motherboard with the AM386-40 processor: one lonely ISA slot, and all the other slots were EISA.
It was a desktop case the size of a big tower lying on its side, and I got it as a bargain because most customers preferred smaller boxes.

BTW, I used this system for about 12 years, and afterwards I used the empty box on a cheap camping table under my open charcoal grill, for insulation and proper height. (I still have the box.)

HM.

Keet

@Mushroom

An actual computer does not have that issue. As I write this, I am rendering a 3.5 hour long video, playing Civilization V, watching a clip from the next project I am going to edit, and typing this all at the same time.

Good luck getting any phone to handle all of that. And my computer is not all that new; it's a 4-core model that is about 6 years old.

And that's why I started my first response by stating that you can't make a real comparison. A phone against a PC is like comparing apples and oranges. What most users don't realize is that a phone is mostly designed to run one task (or a few minor ones) at a time, whereas a PC can handle multiple heavy loads concurrently. A phone won't be able to match that until small processors become more powerful, battery capacity increases significantly, and cooling gets much better. I won't even mention that a phone's screen size makes some PC applications virtually unusable.

There is one exception, more or less. I have a phone that runs a full Linux distribution, so in theory I could run multiple heavy loads with normal PC applications at the same time, because the operating system is designed for it. In reality that is very limited, even when connected to external power: the physically small processor, and of course heat buildup, will throttle performance. Screen size is no problem, because I can dock my phone to a special docking station with a larger screen. In short, you still can't compare a phone to a PC.

Replies:   awnlee jawking  Mushroom
awnlee jawking

@Keet

I have a phone that runs a full Linux distribution

This question may be completely stupid and expose my lack of knowledge of current phone technology, but are all the main apps available for your phone? Can you pay for car parking, do online banking, identify bird calls etc?

AJ

Replies:   Keet
Keet

@awnlee jawking

This question may be completely stupid and expose my lack of knowledge of current phone technology, but are all the main apps available for your phone? Can you pay for car parking, do online banking, identify bird calls etc?

Since it's a full Linux distribution, I can install and run anything available for it. So it's no problem to install and run, for example, GIMP, LibreOffice, Blender, etc. If there's a Linux application for the things you mentioned, I can install and run it; otherwise I use a website that offers the same functionality. Online banking, for example: there are apps for Android and Apple, but I just use the same website I use on my PC. If the screen is too small, I attach my phone to a lapdock, which is just a touch screen, keyboard, trackpad, and extra battery pack. The phone plus dock essentially 'creates' a laptop.
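(As a concrete example: on a Debian-based mobile distribution such as the Librem 5's PureOS - assuming the stock apt package manager - installing one of those desktop applications is the usual one-liner, e.g. sudo apt install gimp.)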

Replies:   awnlee jawking
awnlee jawking

@Keet

There are apps for Android and Apple

To me it seems that e.g. parking apps are generally available for both Android and Apple. Aren't those Linux-like? Will they run on Linux, or have Apple and Google made their versions proprietary?

AJ

Replies:   Dominions Son  Keet
Dominions Son

@awnlee jawking

My understanding is that the Android OS is built on a modified Linux kernel.

Keet

@awnlee jawking

To me it seems that e.g. parking apps are generally available for both Android and Apple. Aren't those Linux-like? Will they run on Linux, or have Apple and Google made their versions proprietary?

If there is no alternative Linux application, there is probably a website where you can do the same thing. There are ways to run Android apps on Linux (with WayDroid, for example, you can run a full Android system on Linux), but for me that would compromise the whole idea of avoiding Android (i.e. Google). Overall I can do pretty much anything I want on this phone.
(Apple never was and never will be an alternative for me, and that's not because of the expensive hardware; my Librem 5 phone isn't exactly cheap either.)
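For the curious, the basic WayDroid workflow looks roughly like this (based on the project's documented commands; details may vary by distribution):

    sudo waydroid init        # download and set up the Android image
    waydroid session start    # start the Android container
    waydroid show-full-ui     # open the Android UI in a window

But as I said, that reintroduces exactly the Android layer I want to avoid.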

Mushroom

@Keet

What most users don't realize is that a phone is mostly designed to run one task (or a few minor ones) at a time, whereas a PC can handle multiple heavy loads concurrently. A phone won't be able to match that until small processors become more powerful, battery capacity increases significantly, and cooling gets much better.

I have seen these trends come and go many times over the decades. Like when Java first came out, and many in the industry were predicting that the era of the desktop was over and we were all going to migrate to thin-client systems running apps streamed from the internet or a server.

Well, that sure as hell never happened. I did work with a few; they were interesting gadgets, but ultimately it was like trying to do "real work" on something with the power of a WebTV box. And the server load on them was insane (a huge issue when we were just migrating from 10 Mb to 100 Mb Ethernet).

I often chuckle, as I have been in the industry long enough to see a lot of things like that come and go over the decades. And with the phone, I don't see it ever getting that fast. More than battery life, it is wattage, and I can't see batteries ever getting powerful enough to overcome that issue. They are better than they were two decades ago, but that is more about runtime than an actual improvement in the wattage they can deliver. That is why a laptop on battery loses a lot of processing power; it gives that up in exchange for not being plugged into the wall. And even at full power, laptop CPUs are not as powerful as their desktop variants.

The longest I ever went without my desktop was 2009-2010, when I was in the Middle East for a year and could not take a desktop. And it sucked, as many things I had never worried about were barely possible on the dual-core laptop I bought specifically for that deployment. Even though on paper that laptop should have smoked my two-year-old desktop, it could not touch it, because it used a lower-power CPU and a system optimized to run on batteries.

I agree, an actual comparison is almost impossible, unless phone systems make a huge evolutionary leap. And even then, desktops will have made the same leap, so they will always be ahead. Probably the fastest leap we have had since the PC revolution started was in the 2000s, flying from single-core 32-bit systems to multi-core 64-bit systems. But ever since then we have largely stagnated: fundamentally no real changes in almost two decades other than piling more and more cores onto the CPU.

And to be honest, I largely checked out of that nonsense almost a decade ago personally. When I realized my two-year-old system could barely handle a game that my ten-year-old console could play with no problem, I realized that computer gaming was broken. And I simply refuse to shell out $500 every couple of years just for a video card.

And I have upgraded both since then, and it is no different. My old Xbone can still play games that my PC can barely handle. Not that the Xbone is that good; it's that coders are paid to program in ways that prop up the video card industry.

Grant

@awnlee jawking

You can make use of Passmark as a very rough indicator of comparative performance.

https://www.cpubenchmark.net/compare/5683vs1039/Apple-A17-Pro-vs-Intel-Core2-Quad-Q6700

awnlee jawking

@Grant

Thank you, that's great!

AJ

Dinsdale

@Grant

That's a good resource. I fed in a couple of processors I have used over the last year (one was too slow to use, so I replaced it with the other) for a comparison.
The A8 was incapable of playing videos over a certain size; it also tended to max out when applying software updates (they had to be decompressed). The A8 shows up as 50% faster than the Intel Core 2 Quad you suspect is in the Dell machine.
I'm a bit sceptical there; the Dell machine came out 18 months after that processor, so I'd expect them to have used something more recent.
Oh, and the Ryzen 5 is crazy fast for my purposes. I'd buy a Ryzen 5 5xxxG now, but it is not significantly faster.

Replies:   Mushroom
Mushroom

@Dinsdale

I'm a bit sceptical there; the Dell machine came out 18 months after that processor, so I'd expect them to have used something more recent.

That is where knowing the exact model and entry point is crucial.

Like most companies, Dell made a large variety of systems at any given time, from almost laughably cheap Celerons to the highest-end Intel CPUs of the day. And most models had a "shelf life" of around a year before being replaced with the newest one.

And that is not even going into the differences between their home-user models and their corporate-grade models. There is almost no comparison between the Dimension of that era (the home computer) and the OptiPlex (the corporate computer). That is why one started at around $250 and the other at around $800.

That is why those who want the "newest" do not go to the name brands; they build it themselves or have it built for them.

Mushroom

@Grant

Sorry, I don't buy it.

I just ran it against mine, and it claims the iPhone is over twice as powerful as my desktop. But once again, I defy any ARM processor to handle the tasks that my desktop does.

Like actually rendering over 3 hours of 1440x1080 HD video. I do that several times a week on average, and as I have been doing it for over 20 years, I know exactly how CPU-intensive it is. But according to that site, I should be able to dump my computer and do it twice as fast on my phone.

Sorry, calling complete and utter BS on that claim.

Mushroom

@awnlee jawking

I was wondering, to an order of magnitude, how much more processing power does the latest iPhone have?

The two really do not compare, as the architecture is very different in each.

Apple uses essentially an offshoot of the ARM architecture, which is designed specifically for small portable devices that often run on batteries, whereas your computer has a full-power CPU designed to run on "mains" power. So when comparing things like "cores" and the like, there really is no comparison.

Like trying to compare a 64-bit Alpha RISC processor of the 1990s with a 32-bit CISC processor from Intel during the Pentium era. It almost literally is apples and grapefruit.

That is why nobody seriously does things like video editing on an ARM-class processor. Not even on netbooks and other computers of that class; they simply can't handle those kinds of tasks effectively.

Even considerations like transistor counts have become pointless in many instances. One can only really compare CPUs that are of similar capability.

Replies:   John Demille
John Demille

@Mushroom

That is why nobody seriously does things like video editing on an ARM-class processor. Not even on netbooks and other computers of that class; they simply can't handle those kinds of tasks effectively.

Sorry. But you're talking out of your ass. While what you said may have been true in the 2000s, it's not anymore.

My Mac Studio can prove you wrong in a second.

Replies:   Mushroom
Mushroom
Updated:

@John Demille

My Mac Studio can prove you wrong in a second.

That is also not an iPhone, and it uses a different processor. Yes, it is still an ARM processor, but it has significantly more power than the one in our phones. The M1- and M2-series processors those machines use are related to the iPhone's, but only barely. It also runs a full desktop operating system, macOS, not the phone's iOS.

Once again, compare like things. Your Mac Studio is not an iPhone, and does not use an iPhone processor. This can be seen in the wattage alone. The A16 draws only 7.6 watts. The M2 draws 295 watts.

Yes, both are ARM processors, but one is typical of the breed and designed to run entirely on batteries, so it is a very low-power part. The other has been modified heavily and runs at a much higher power level, has between 4 and 9 times the transistor count, and has anywhere from 8 to 24 CPU cores against the phone's six. And that is not even going into the differences in GPU core count, L1 and L2 cache sizes, and so on.

So no, I am not talking out my arse. I know exactly what I am talking about. Your Mac Studio is really a small-form-factor desktop; it is not a cell phone.

And yes, I know there are very high-end ARM processors out there, but 90% of people will never see one. Like the world's fastest supercomputer, which uses ARM processors. However, it also uses over 7.2 million cores and pulls 28 megawatts of power.

When talking about cluster computing like that, the power of each individual processor matters little compared to the sheer number of them. That is why most digital editing companies used dual-processor computers a couple of decades ago: each individual CPU may not be as fast, but combined they can easily outperform a faster single-core one. That is why I built dozens of Athlon MP systems for video and animation studios, far more than single-processor P4 systems. And a lot of studios still used dual-processor P3 systems at the time, as those would still outperform a P4 for processor-intensive operations.

Replies:   John Demille
John Demille

@Mushroom

That is also not an iPhone, and it uses a different processor. Yes, it is still an ARM processor, but it has significantly more power than the one in our phones.

What you said was 'ARM-class'; you didn't say 'phone CPU'. So my point still applies.

The M2 draws 295 watts.

Do you think the M2 is made by Intel? The maximum draw any testing website managed with the M2 Ultra (with the 76-core GPU) was 61 watts. Yes, that's more than the phone, but nowhere near what Intel's hardware draws these days.

like actually rendering over 3 hours of 1440x1080 HD video.

You should talk to my wife then; she does videos on her iPhone regularly. Maybe not three-hour ones, but she's shown me ones up to 30 minutes long.

Being a desktop man myself, I have no idea how she can do all those videos she posts online on her phone. iMovie on the phone can render video surprisingly quickly.

The problem with doing a 3-hour video on the phone is not the inadequacy of the hardware; trust me, the iPhone's CPU is more than capable of it. The problem would be the less convenient workspace, considering how small the screen is.

Apple's ARM-compatible designs are not what you would expect when you hear "phone CPU". For old fogeys like us, the stuff they're putting into phones these days is the stuff of science fiction.

Replies:   Mushroom
Mushroom

@John Demille

Do you think the M2 is made by Intel?

What does that matter? You are getting lost in the chaff there.

For over a decade, the most powerful CPU used in desktop systems was not made by Intel but by DEC. And both AMD and Intel have been major players in the field for decades. Heck, AMD was actually the company that Intel themselves used for many years to make their x87 math co-processors.

That is why so many laughed when Intel tried to smear them as making "clown chips", and when they finally made their own x87 chip, we got the "Pentium Bug".

As an FYI, I have been in this industry for over four decades now, and I started working corporate-side as a Mac tech in 1994. So do not think I am "owned" by any one company or tech. But one does have to compare like technologies with like, and not mix and match two very different things.

And no, the problems are not what you lay out. There is nothing stopping somebody from hooking up their phone or netbook/tablet to a real screen and real keyboard and mouse. I should know, I did that myself when doing remote terminal work on UNIX systems.

That would let me use my Google Nexus 7 as if it were a desktop, as far as ease of use went. But it was still just a tablet, with all the restrictions of a tablet. And the quad-core 1.5 GHz Krait CPU was damned powerful at the time - for a tablet.

But it was still a tablet CPU. Yes, the M2 is still an ARM processor, and it compensates for that by shoveling in from 8 to 24 cores.

That alone shows how limited each individual core is, if it has to shovel in so many just to compete with desktops that have 3 to 4 cores.

Replies:   Grey Wolf  John Demille
Grey Wolf

@Mushroom

That alone shows how limited each individual core is, if it has to shovel in so many just to compete with desktops that have 3 to 4 cores.

What comparable desktops are you looking at that have 3 to 4 cores? My 12th-Gen Intel-based laptop has 14 cores. Many Intel desktops are 12+. Ryzen desktop CPUs go up to 64 cores.

In the previous Intel family (Alder Lake, time-comparable to M2), there are only a handful of processors with 2 cores (very-low-end Pentium and Celeron-branded, one i3) and 4 cores (some i3's). Some Pentiums and Celerons have 5, some i3's have 6. None of those systems will keep up with the lowest-end 8-core M2.

You're not going to find many '3 to 4 core' desktops that will even vaguely keep up with an M2. I suspect the number is zero.

John Demille
Updated:

@Mushroom

Do you think the M2 is made by Intel?

What does that matter? You are getting lost in the chaff there.

Either you don't detect sarcasm, or you're simply ignoring the statement for what it is.

You said an M2 consumed 295 watts. Apple CPUs don't reach anywhere near that high in power consumption.

This discussion is going nowhere obviously because you keep sidestepping the arguments and going on different tangents. I don't know what you're trying to prove.

The OP asked about the perceived speed of a new phone vs. an old desktop. We provided that info, and you've been arguing against all of it.

Check this page:

https://www.cpu-monkey.com/en/cpu_benchmark-geekbench_6_single_core

It's for single-core performance, which takes out of the equation the large number of cores that desktops have versus phones.

The latest iPhone 15 Pro has an A17 Pro chip. The fastest Intel processor at this time is the Core i9-14900KF.

Apple A17 Pro scores 2,952

Intel Core i9-14900KF scores 3,289

To round the numbers, the A17's single core scores 90% of the Intel's single core - one inside a phone, with very tight restrictions on energy consumption and heat dissipation, running at 3.2 GHz; the other running with effectively unlimited energy consumption and heat at 6 GHz.
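(For the record: 2,952 / 3,289 is about 0.90, hence the 90%.)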

So the phone is no slouch no matter how much you shit on it.

The iPhone's chip has hardware video encoders and decoders that are desktop-class. It encodes H.265 4K video at 120 frames per second; that would have been unbelievable on a desktop 5 years ago. All in a phone running on a battery.

richardshagrin

@awnlee jawking

All this talk about computers is probably science fiction.

Replies:   awnlee jawking
awnlee jawking

@richardshagrin

All this talk about computers is probably science fiction.

Undoubtedly. And all the talk of flying to the moon. We haven't even travelled as far as the edge of the world yet.

Bah humbug!

AJ

Replies:   Mushroom
Mushroom

@awnlee jawking

And all the talk of flying to the moon.

Well, as there is no atmosphere once you pass around 100 miles, I would semantically argue that nobody ever has or ever will "fly" to the moon. We have traveled there, but never "flew" there.

Grey Wolf

@awnlee jawking

All of the discussion around ARM vs x86-64 (which amounts to a rehash of RISC vs CISC) is completely missing a bunch of variables which have far more to do with system performance than architecture, and it pains me that so many people have talked right past them and ascribed performance differences to a whole bunch of red herrings.

Those variables are: RAM, storage, thermal characteristics, and power consumption. A few brave people touched on thermals and power, but those aren't even as important for getting a handle on this as the first two.

Thought experiment: take a really good high-end Intel CPU and GPU. Now, give it 6GB of RAM (same as a top-of-the-line iPhone) - mind you, that's 6GB total between CPU and GPU - and PCI Gen2-class NVMe storage (again, same as the iPhone). How fast do you think that system is going to render 4K video?

Answer: about as fast as the iPhone. Video rendering is very memory- and storage-intensive. Gotta get those bits off storage, into memory, manipulate them, then push them back into storage. Over, and over, and over.
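To put rough numbers on that (a back-of-envelope sketch assuming uncompressed 8-bit 4:2:0 frames, i.e. 1.5 bytes per pixel): one 3840 x 2160 frame is 3840 x 2160 x 1.5 ≈ 12.4 MB, so a 120 fps pipeline moves roughly 12.4 MB x 120 ≈ 1.5 GB/s through memory before the encoder even starts.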

For further fun, limit the PC to passive cooling only. If you can make it run at all, how fast is it going to render video?

Those design constraints matter a lot - much more than CPU architecture. Desktop M2's and M3's make it abundantly clear that it's not an architecture issue, it's an infrastructure issue.

It's not an OS issue either. iPhones are running a Unix(/Mach) kernel and have been since they first came out. Android phones are running a modified version of Linux. Both are basically modern Unix boxes - definitely with some optimizations for low-power environments, but nothing that drastically limits potential performance.

Give the iPhone a task that runs in working RAM and compare that to the PC and you'll get something approaching a useful benchmark - but that's only real-world if your task matches that. An iPhone is an iffy choice as a workhorse video rendering system not because it's got an ARM processor or because it's running the iOS flavor of Unix, but because it's got significant RAM, storage, thermal, and power constraints.

The OP said 'comparative processing power'. A 2023 iPhone will run rings around nearly any 10-year-old desktop system, even given the thermal and power constraints, unless the task is memory-intensive and the desktop had a memory configuration highly atypical for its generation (the iPhone's storage speed is faster than SATA III, so storage isn't an issue).

A same-generation desktop with lots of memory, fast storage, active cooling, and plenty of power will run rings around the iPhone.

Slap an ultra-low-power Intel chip in the phone and it'll be molasses. There's a reason no one puts x86-64-architecture chips in phones - making that architecture run fast is hard and takes a lot of power, comparatively speaking.

The jury is still out on whether a well-designed ARM chip in an actively cooled system with ample memory and fast storage will inherently be worse, the same as, or better than a well-designed x86-64 chip in the same environment - and it's likely that it'll wind up being workload-specific anyway.

But that's also beside the point of 'how much faster is my phone than an old desktop?'

Dinsdale
Updated:

@awnlee jawking

I had to think of this thread when I read 'How a Cray-1 Supercomputer Compares to a Raspberry Pi' today. Spoiler alert: the Raspberry Pi comes in at 4½ times the Cray.
All rather amusing.
