
First Look Roundup: Apple iMac Pro

After years of fending off questions from its increasingly unhappy workstation customers, Apple has finally released an updated professional Mac. The new iMac Pro is built on an iMac form factor, but packs considerably more horsepower under the hood than your typical iMac or even the older Mac Pro. Before we round up early reviewer impressions, let’s review the system’s baseline specifications.

The $4,999 iMac Pro comes with what appears to be a custom Xeon W processor; Intel’s public lineup doesn’t include an eight-core Xeon W with a 3.2GHz base and a 4.2GHz boost clock. The chip could be a downclocked Xeon W-2145 (3.7GHz base, 4.5GHz boost). 32GB of DDR4-2666 memory ships standard, along with a 1TB SSD, a Radeon Pro Vega 56, 10Gb Ethernet, four Thunderbolt 3 (USB-C) ports, and a 27-inch 5K (5120×2880) display that supports the DCI-P3 color gamut.

[Image: iMac Pro ports]

For comparison, the old Mac Pro ran $3,999 for a 3GHz Xeon CPU, 16GB of DDR3-1866, a 256GB SSD, and a pair of GCN 1.0 D700 GPUs. While I realize people will argue about the relative value of Mac versus PC workstations until the end of time, there’s no arguing the iMac Pro is a much better value than its predecessor. But how does the entire package mesh, and does it meet early reviewer expectations? Let’s take a look, with the caveat that these write-ups all appear to be first looks or previews rather than full-on reviews.

Macworld said the system has an entirely revamped cooling-plus-blower setup that Apple claims allows for 80 percent better cooling compared with traditional iMacs (with a high-end CPU and GPU packed into the same form factor, excellent airflow is essential). There’s a new T2 security chip onboard for handling FaceTime, LEDs, storage devices, file encryption, and a new security feature Apple didn’t demo at the event.

Veteran tech journalist Lance Ulanoff spent more time discussing Apple’s various demos and applications, describing himself as sitting in “stunned disbelief” at what the new iMac Pro could accomplish (albeit with a 10-core CPU). He also highlighted the work developers are doing to implement VR support on the iMac Pro — the new Vega 56 GPU will come in extremely handy for that sort of work, especially compared with the old, GCN 1.0 (Tahiti) GPUs that were packed into the Mac Pro.

Ars Technica said the 8-core and 10-core versions are available today, with 14-core and 18-core systems arriving in early 2018. All of these CPUs support AVX-512, and all have two FMA units (some Intel CPUs have just one). Ars also noted the 18-core chip won’t always be faster than the 10-core CPU, thanks to differences in application thread scaling and the chips’ power and heat budgets. The 18-core CPU has a base frequency of 2.3GHz, while the 10-core chip runs at a 3.3GHz base, so lightly threaded workloads will be faster on the 10-core part. Logic Pro and Final Cut Pro have been updated to coincide with the launch. Ars has the most comprehensive software coverage, if you’re looking for a discussion of the applications Apple demoed.
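
That core-count-versus-clock trade-off is easy to check with back-of-the-envelope math. Here’s a rough sketch using the article’s base clocks (illustrative only; real results also hinge on Turbo Boost behavior, AVX clock offsets, and memory bandwidth):

# Rough throughput model: usable cores * clock. Illustrative only; real
# performance also depends on Turbo Boost, AVX offsets, and memory bandwidth.
def relative_throughput(threads, cores, base_ghz):
    return min(threads, cores) * base_ghz

for threads in (4, 10, 18, 36):
    t10 = relative_throughput(threads, cores=10, base_ghz=3.3)
    t18 = relative_throughput(threads, cores=18, base_ghz=2.3)
    winner = "10-core" if t10 > t18 else "18-core"
    print(f"{threads:2d} threads: 10-core={t10:5.1f}  18-core={t18:5.1f}  -> {winner}")

At 10 threads or fewer, the faster base clock wins (33.0 vs. 23.0); only workloads that scale well past 10 threads favor the 18-core part.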

The Verge is the least positive about the new system of the sites we’ve rounded up. It dings the iMac Pro for its near-total lack of expandability, and writes: “If you’re going to buy this machine, my opinion is that you should know precisely what you plan on using it for — with more clarity than other computer purchases require. That’s not just because the price is exorbitant compared to consumer-grade computers, either. It’s also because if you simply need a radically powerful machine, there’s another professional-grade Mac coming next year, the announced but as-yet unseen Mac Pro.”

[Image: iMac Pro (image by Matthew Buzzi)]

One common theme in everyone’s coverage is that the demos Apple showed were both comprehensive and impressive. Every company picks workloads that show its hardware in a positive light, but Apple threw the kitchen sink at these machines, and they didn’t falter under the load.

No one is willing to recommend for or against the iMac Pro without the opportunity to review it fully first. The general opinion, however, is that the iMac Pro looks like a great system on its own terms, but it’s also an extremely locked-down platform; Apple apparently expects RAM upgrades to be performed by a service provider. In short, it’s a genuinely powerful machine today that locks you into certain compromises tomorrow. For some buyers it will make a lot of sense, but as The Verge says, it’s not for everyone. Users who value modularity and upgradability will be better served by the Mac Pro when it arrives.

Published at Thu, 14 Dec 2017 21:33:33 +0000


AlphaZero Is the New Chess Champion, and Harbinger of a Brave New World in AI

The world has quietly crowned a new chess champion. While it has now been over two decades since a human has been honored with that title, the latest victor represents a breakthrough in another significant way: It’s an algorithm that can be generalized to other learning tasks.

It gets crazier. AlphaZero, the new reigning champion, acquired all its chess know-how in a mere four hours. AlphaZero is almost as different from its fellow AI chess competitors as Deep Blue was from Garry Kasparov, back when the latter first faced off against a supercomputer in 1996. And what’s more, AlphaZero stands to upend not merely the world of chess, but the whole realm of strategic decision-making. If that doesn’t give you pause, it probably should.

From its origins in India, the game of chess has stood the test of time as a measure of strategic intelligence. Games of imperfect information, like the variation of poker known as Texas Hold-‘Em, arguably have more in common with our day-to-day strategic decisions. But chess remains an important measure of how we think about intelligence. Chess requires being able to gauge an opponent’s tactics, memorize hundreds of board positions, and think ahead several moves. At least that was the common approach to the game until recently, and also the way conventional chess AIs like Deep Blue were programmed.

The previous reigning champion, Stockfish 8, was no exception. It used a search algorithm, guided by evaluation heuristics programmed in by its creators, to explore enormous numbers of move combinations. Such chess engines also make widespread use of opening books and endgame tables, effectively supplying the search with all the commonly accepted chess wisdom from which to draw its moves. AlphaZero, the new champion, soundly defeated Stockfish 8 in a 100-game series without losing a single game. To do so, it took a completely different tack.
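
For reference, the conventional approach looks something like this in miniature. Below is a generic negamax game-tree search, the family of algorithm engines like Stockfish build on (real engines add alpha-beta pruning, opening books, endgame tables, and heavily tuned evaluation functions; the evaluate, legal_moves, and apply_move callbacks here are hypothetical placeholders, not Stockfish’s actual code):

# Minimal negamax game-tree search, the kind of exhaustive exploration
# conventional engines are built on. Callbacks are user-supplied placeholders.
def negamax(position, depth, evaluate, legal_moves, apply_move):
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)  # static score from the side to move's view
    best = float("-inf")
    for move in moves:
        score = -negamax(apply_move(position, move), depth - 1,
                         evaluate, legal_moves, apply_move)
        best = max(best, score)
    return best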

The creators of AlphaZero, the London-based AI project known as DeepMind, have pioneered an approach to AI known as deep reinforcement learning. Instead of looking at games like chess and Go as search problems, they treated them as reinforcement learning problems. Reinforcement learning may sound vaguely familiar if you took an Intro to Psychology class in college; it’s precisely the way humans learn. We actually don’t play chess like a search engine, exhaustively exploring different move combinations in our head to find the best one. Rather, through repeated playing we gain a set of associations about different board positions and whether they are advantageous. Through repeated exposure, good board positions get reinforced in our minds, and poor ones get pruned — though unlike pure reinforcement learning, we may augment this with information taken from books or word of mouth. Then we draw upon these associations during gameplay.
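
As a loose illustration of how good positions get “reinforced,” here’s a minimal tabular value update of the sort classical reinforcement learning is built on (a generic temporal-difference rule, not DeepMind’s actual implementation, which replaces the lookup table with a deep neural network):

# Toy temporal-difference value update: positions seen in winning games get
# nudged toward a higher value, positions in losing games toward a lower one.
from collections import defaultdict

values = defaultdict(float)   # board position (any hashable) -> estimated value
ALPHA = 0.1                   # learning rate

def update_from_game(positions, outcome):
    """positions: sequence of states visited; outcome: +1 for a win, -1 for a loss."""
    for pos in positions:
        values[pos] += ALPHA * (outcome - values[pos])

update_from_game(["openA", "midB", "endC"], outcome=+1)
update_from_game(["openA", "midX", "endY"], outcome=-1)
print(values["openA"])  # pulled up by the win, back down by the loss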

The mathematical basis of how we apply reinforcement learning as humans has been painstakingly worked out over the last 30 years. That brings us to AlphaZero. By simply playing against itself for a mere four hours, the equivalent of over 22 million training games, AlphaZero learned the relevant associations between the various chess moves and their outcomes. In doing so, it was learning much the way a human does, but because the computer can compress 100,000 hours of human chess play into a few minutes, it builds up a set of associations far more quickly than we ever could, and over a far wider range of move combinations.


Building upon research in psychology and animal cognition, DeepMind first created a deep reinforcement learning algorithm to conquer a handful of early Atari video games. Realizing the importance of such a multipurpose learning algorithm, Google quickly snapped up the company in what could prove a lucrative acquisition. Within a few years, Google demonstrated that value by using deep reinforcement learning to optimize the heating and cooling of its data centers, reducing their energy footprint by 15 percent.

DeepMind made further waves by applying reinforcement learning to the board game Go, long thought beyond the reach of AI because of its almost infinite variety of move combinations. Now the company has shown the same approach can dominate in chess. Since reinforcement learning is the method we humans use to gain many kinds of skills, what can deep reinforcement learning not learn?

Deep reinforcement learning is nothing less than a watershed for AI, and by extension humanity. With the advent of such über-algorithms capable of learning new skills within a matter of hours, and with no human intervention or assistance, we may be looking at the first instance of superintelligence on the planet. How we apply deep reinforcement learning in the years to come is one of the most important questions facing humanity, and the basis of a discussion that needs to be taken up in circles far wider than Silicon Valley boardrooms.

Aaron Krumins is the author of a forthcoming book on reinforcement learning.

Published at Tue, 12 Dec 2017 12:30:02 +0000


Samsung Begins Mass Production of 2nd Generation 10nm LPP Process Node

Samsung announced it has begun mass production of parts based on its 2nd generation 10nm LPP process node. It’s a significant step for the company, which faces competition from TSMC and GlobalFoundries for customers who want cutting-edge semiconductor technology.

As node transitions have slowed and the improvement each new node delivers has shrunk, it’s become more common for foundries to split their performance gains across multiple generations. Samsung’s first generation of 10nm, 10nm LPE, offered 27 percent higher performance or 40 percent lower power consumption compared with its 14nm predecessor. The new 10nm LPP process is a smaller jump, with a 10 percent performance improvement or a 15 percent power reduction compared with 10nm LPE parts.
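
Those per-generation claims compound. Here’s a quick sanity check of the cumulative gain over 14nm, taking Samsung’s marketing figures at face value (an assumption, and note the claims are either/or, not both at once):

# Compounding Samsung's claimed per-node gains (marketing figures, taken at
# face value). Performance multiplies up; power consumption multiplies down.
perf_14_to_lpe, power_14_to_lpe = 1.27, 1 - 0.40    # 10nm LPE vs 14nm
perf_lpe_to_lpp, power_lpe_to_lpp = 1.10, 1 - 0.15  # 10nm LPP vs 10nm LPE

print(f"10LPP vs 14nm performance: +{perf_14_to_lpe * perf_lpe_to_lpp - 1:.0%}")
print(f"10LPP vs 14nm power:       -{1 - power_14_to_lpe * power_lpe_to_lpp:.0%}")
# -> roughly +40% performance OR -49% power versus 14nm, not both together.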

“We will be able to better serve our customers through the migration from 10LPE to 10LPP with improved performance and higher initial yield,” said Ryan Lee, vice president of Foundry Marketing at Samsung Electronics. “Samsung with its long-living 10nm process strategy will continue to work on the evolution of 10nm technology down to 8LPP to offer customers distinct competitive advantages for a wide range of applications.”


Samsung and its rival TSMC are taking somewhat different paths with 10nm. TSMC has stated it views 10nm as a short-lived node, while Samsung plans to keep the technology around for a longer period of time. There’s no “right” answer to the question of how to navigate node transitions, particularly given the way node names now lack any objective meaning beyond “Marketing says a new name is better.”

TSMC, GlobalFoundries, Samsung, and Intel define feature sizes differently at the same node name, with Intel typically offering smaller features than the pure-play foundries at the same label. TSMC and Samsung’s 10nm, for example, is expected to roughly match Intel’s 14nm feature sizes, while Intel’s 10nm should be the equal of the 7nm nodes the three rival foundries will eventually deploy. There’s also some uncertainty in long-term roadmaps related to EUV availability and the viability of triple or quadruple patterning for semiconductor designs; these multi-patterning techniques allow 193nm ArF lithography to etch features at such tiny scales, but they also drive up mask costs, and therefore SoC costs.

Samsung has also announced its new fab, S3, is ready to ramp up on 10nm production and, in the not-too-distant future, EUV integration as well. The company will also build an 8nm node without EUV, to give itself a migration path forward if EUV integration doesn’t go well.

Published at Thu, 30 Nov 2017 15:30:51 +0000


750 Raspberry Pis Turned Into Supercomputer for Los Alamos National Laboratory

It’s often a challenge for programmers and scientists to get time on high-performance supercomputers. These machines are expensive to build and maintain, but there’s no substitute for the massively parallel computing environment of a supercomputer. A new project at the Los Alamos National Laboratory’s High Performance Computing Division seeks to make supercomputers more accessible with a little help from some Raspberry Pi clusters.

Los Alamos National Laboratory (LANL) is home to several of the world’s most powerful supercomputers, including Trinity. That machine cost nearly $200 million to build, and its Intel Xeon Phi CPU cores are both powerful and power-hungry. Still, scientists need that sort of power for certain applications. For testing and running simpler programs, the modest ARM chips in the Raspberry Pi can be sufficient when you get enough of them together. LANL worked with Australia’s BitScope Designs to create its new Pi-powered supercomputer from 750 individual mini-computers.

The device is based on five rack-mount BitScope Cluster Modules. Each one has 150 Raspberry Pi 3 nodes networked together (that’s 750 total Pis). Each Raspberry Pi 3 has a Broadcom BCM2837 system-on-a-chip (SoC) with four 64-bit CPU cores clocked at 1.2GHz. They’re ARM Cortex-A53 reference cores, which are the same thing you’ll find in many budget smartphones running Qualcomm and MediaTek SoCs. This adds up to 3,000 available CPU cores for the full system, but it uses only a fraction of the power needed for a computer like Trinity. LANL estimates the system will need just 1,000 watts at idle and 2,000 watts during typical usage. The maximum load is 4,000 watts. Other supercomputers use between 10 and 25 megawatts of power.
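
The headline numbers are easy to verify, and the power gap is the striking part. A quick check against the article’s own figures:

# Totals for the BitScope/LANL system, straight from the article's figures.
modules, pis_per_module, cores_per_pi = 5, 150, 4
total_pis = modules * pis_per_module        # 750
total_cores = total_pis * cores_per_pi      # 3,000

typical_watts = 2_000                       # typical usage, per LANL's estimate
supercomputer_watts = 10e6                  # low end of the 10-25 MW range
print(f"{total_pis} Pis, {total_cores} cores")
print(f"A 10 MW machine draws {supercomputer_watts / typical_watts:,.0f}x more power")
# -> 750 Pis, 3000 cores; the 10 MW machine draws 5,000x more power.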

[Image: close-up of BitScope racks]

The Raspberry Pi-based supercomputer will be much slower than a “real” supercomputer, but its system architecture is similar to those far more expensive machines. LANL envisions researchers testing their code on the BitScope system before porting it to a more powerful system with a waiting list. Not only does this free up time on supercomputers for more important work, it also costs researchers much less to test code on the slower ARM-based systems.
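
That workflow is the standard MPI development loop: write it small, prove it runs in parallel, then port. Here’s a minimal sketch of the sort of smoke test a researcher might run first (it assumes the mpi4py package and an MPI runtime are installed on the cluster, which is our assumption, not LANL’s published software stack):

# hello_cluster.py -- trivial MPI smoke test of the sort you'd run on a small
# cluster before requesting time on a production supercomputer.
# Run with, e.g.: mpiexec -n 600 python hello_cluster.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()        # this process's ID within the job
size = comm.Get_size()        # total number of processes in the job

# Each rank computes a strided partial sum; reduce() combines them on rank 0.
partial = sum(range(rank, 1_000_000, size))
total = comm.reduce(partial, op=MPI.SUM, root=0)
if rank == 0:
    print(f"{size} ranks computed sum(range(1_000_000)) = {total}")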

BitScope plans to make the Cluster Modules available for purchase early next year. A single rack with 150 Raspberry Pi nodes will cost around $18,000-$20,000, which works out to roughly $120-$133 per node. Of course, a Raspberry Pi board costs just $35 at retail, but these will come pre-configured and networked together for instant parallel computing. That’s not bad when you consider even smaller supercomputers running Intel and AMD chips can cost several million dollars.

Published at Mon, 27 Nov 2017 21:09:40 +0000


Intel Patches Major Flaws in the Intel Management Engine

Intel has acknowledged and patched a new suite of security problems affecting its Intel Management Engine (IME). This subsystem controls many low-level capabilities of the SoC, and can be used for features like remote access and Intel’s Trusted Execution Engine. The company has released a list of 10 vulnerabilities across multiple products, addressed by recent firmware updates. Potentially affected systems include:

  • 6th, 7th & 8th Generation Intel® Core™ Processor Family
  • Intel® Xeon® Processor E3-1200 v5 & v6 Product Family
  • Intel® Xeon® Processor Scalable Family
  • Intel® Xeon® Processor W Family
  • Intel® Atom® C3000 Processor Family
  • Apollo Lake Intel® Atom Processor E3900 series
  • Apollo Lake Intel® Pentium™
  • Celeron™ N and J series Processors

That’s Intel’s entire product line dating back to the introduction of Skylake. According to Intel, attackers could impersonate the Intel Management Engine, Server Platform Services, and/or the Trusted Execution Engine, load and execute arbitrary code without the user or OS being aware of it, and destabilize or crash a system altogether.

Intel’s admission of multiple vulnerabilities is likely to raise eyebrows, given the company’s previous conduct regarding the IME. Intel goes to great lengths to hide exactly how the IME works, and there’s no way for the main x86 chip to even snoop on what the IME is doing (the IME has previously run on an embedded 32-bit Argonaut RISC core, though it’s not clear if this is still the case). This means there’s effectively a second operating system running on every single Intel processor, and there’s no way for the user to control it or shut it off; disabling the IME on a motherboard that shipped with it enabled leaves the system unbootable until the capability is re-enabled.

While a research team did find a way to turn the function off by setting a single bit, they note that actually doing so could permanently brick a system, and the setting doesn’t take effect until the system has actually booted and the main CPU has started. As of this writing, Intel has not offered a safe, reliable method for anyone to disable the Intel Management Engine.

[Image: some of the IME’s capabilities]

We’ve actually learned more about the IME in the past year than in the previous half-decade. A Google software engineer recently confirmed that the subsystem runs the MINIX 3 operating system. Google has reportedly been trying to replace proprietary firmware in its own servers, and the Intel IME has been a stumbling block in that process. Intel has released a detection tool you can use to check whether your system is affected by these issues. Updates have to be issued by firmware vendors, however, so even if your system is affected, it may not receive a fix in the near future.

Published at Wed, 22 Nov 2017 14:00:07 +0000


Key Windows 10 Anti-Malware Tech Critically Broken

Over a decade ago, Microsoft added support for a key malware mitigation technique that makes it harder for rogue applications to predict where code and data will be loaded in memory. This technique, called address space layout randomization (ASLR), loads an application’s code and data at different addresses each and every time the application is run. If your code is riddled with security flaws, ASLR won’t secure it, but it will (hopefully) make those flaws a little harder to find and therefore exploit. Or at least, that’s how it’s supposed to work — but Windows 10, it turns out, has a teensy little problem. It stores its supposedly randomized data in exactly the same place, each and every time.

To understand the magnitude of the failure, it may help to think of a loose analogy. Imagine you have an insecure mailbox that’s constantly being robbed. One hypothetical way to deal with this problem is to have many mailboxes scattered across your property. Each day, your long-suffering postal worker puts your mail (4-5 pieces) in a subset of available mailboxes (let’s say, 30 mailboxes total). A person could still search your property and find them, but it’s going to take longer and be more obvious.

Now, imagine that instead of scattering your 4-5 pieces of mail across a randomly chosen handful of those mailboxes, your postal worker stuck it in exactly the same boxes, each and every time. That’s more or less what’s happening here, and it’s a problem afflicting both Windows 8 and Windows 10. Without any entropy (randomness), there’s no protection offered at all.
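
You can see healthy ASLR in action for yourself: a freshly allocated buffer should land at a different address on every run. Here’s a minimal sketch (the exact addresses depend on your OS and interpreter build; the only point is that repeated runs shouldn’t print the same value):

# aslr_peek.py -- run this several times. With ASLR working, the printed
# address changes from run to run because the process layout is randomized.
# The same address on every run is exactly the failure described here.
import ctypes

buf = ctypes.create_string_buffer(64)   # a fresh native allocation
print("buffer address:", hex(ctypes.addressof(buf)))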

There are two ways to enable ASLR. One is the /DYNAMICBASE flag provided by the Visual C++ linker. This method still works perfectly, as far as anyone can tell. But since relying on programmers or vendors to always keep their code properly secure is a recipe for disaster, Microsoft also provides tools to force applications to use ASLR whether they were built for it or not. This capability is baked into the Fall Creators Update as Windows Defender Exploit Guard, and was previously available via Microsoft EMET (Enhanced Mitigation Experience Toolkit), a GUI for enabling security measures already built into the OS. The screenshot below shows the newer Defender Exploit Guard interface in Windows 10 FCU.

[Image: Exploit Protection settings in Windows Defender Exploit Guard]
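
As for the /DYNAMICBASE side of things: the flag simply sets a bit in a binary’s PE header, so it’s easy to check whether a given executable opted into ASLR at link time. Here’s a sketch using the third-party pefile package (pefile is our choice, not anything Microsoft mandates; any PE parser will do):

# Check whether an .exe or .dll was linked with /DYNAMICBASE, i.e. whether it
# opted into ASLR on its own. Requires the third-party 'pefile' package
# (pip install pefile).
import pefile

IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE = 0x0040  # bit defined by the PE/COFF spec

def has_dynamicbase(path):
    pe = pefile.PE(path, fast_load=True)
    return bool(pe.OPTIONAL_HEADER.DllCharacteristics
                & IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE)

# Example: most OS binaries should report True on a modern system.
print(has_dynamicbase(r"C:\Windows\System32\notepad.exe"))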

The problem is this: Apparently Microsoft’s default mandatory-ASLR implementation fails to activate a key component of ASLR, known as “bottom-up ASLR.” Microsoft’s own technical documentation describes bottom-up ASLR as a method of assigning a base address by searching “for a free region starting from the bottom of the address space (e.g. VirtualAlloc default).” Enabling mandatory ASLR without simultaneously enabling bottom-up ASLR means that memory values are stored in exactly the same location each and every time. Here’s how CERT describes the problem:

Although Windows Defender Exploit guard does have a system-wide option for system-wide bottom-up-ASLR, the default GUI value of “On by default” does not reflect the underlying registry value (unset). This causes programs without /DYNAMICBASE to get relocated, but without any entropy. The result of this is that such programs will be relocated, but to the same address every time across reboots and even across different systems. Windows 8 and newer systems that have system-wide ASLR enabled via EMET or Windows Defender Exploit Guard will have non-DYNAMICBASE applications relocated to a predictable location, thus voiding any benefit of mandatory ASLR. This can make exploitation of some classes of vulnerabilities easier.

It finishes on the cheery note that there’s no practical solution to the problem currently available for deployment, but individuals can reenable the security ASLR is supposed to provide by importing the following registry key:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\kernel]
“MitigationOptions”=hex:00,01,01,00,00,00,00,00,00,00,00,00,00,00,00,00
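
After importing the key, a few lines of Python (Windows-only, standard-library winreg) can confirm the value actually landed:

# Confirm the MitigationOptions value written by the .reg fix above exists.
# Windows-only; read-only, so it is safe to run.
import winreg

key = winreg.OpenKey(
    winreg.HKEY_LOCAL_MACHINE,
    r"SYSTEM\CurrentControlSet\Control\Session Manager\kernel")
value, value_type = winreg.QueryValueEx(key, "MitigationOptions")
print("MitigationOptions =", value.hex() if isinstance(value, bytes) else value)
winreg.CloseKey(key)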

As always, we do not recommend mucking about in the registry unless you are certain you know what you’re doing. US-CERT has some additional details on both the problem and this fix available on its website. And yes, Windows 7 users, you get to preen a bit — this problem does not affect your operating system.

Published at Mon, 20 Nov 2017 20:12:25 +0000


Apple’s laptop designs are cornering Mac users (Macworld)

Surface Book 2 review: Microsoft gets closer to the ‘ultimate laptop’ (Engadget)

The Surface Book 2 is one of the most powerful and well-designed Windows laptops on the market. And thanks to its improved hinge, it doesn’t feel any different than a traditional notebook. It’s the best MacBook Pro competitor we’ve seen yet.

Related coverage:

  • REVIEW: Microsoft’s newest laptop is a powerful alternative to any of Apple’s MacBooks or iPads (Business Insider)
  • Microsoft Surface Book 2 review: beauty and brawn, but with limits (The Verge)
  • Microsoft Surface Book 2 (15-inch) Review (Laptop Mag)