
Intel Skylake Xeon E3-1200 V5 and Xeon E3-1500 V5 CPU Lineup

The latest Intel Skylake-based Xeon E3-1200 V5 and Xeon E3-1500 V5 processor family is now well into our production server lineup, with many customers already benefiting from the increased performance. The single-socket family consists of 14 SKUs aimed at server, workstation and graphics-focused server workloads.

With Skylake, Intel is transitioning from its Denlow platform to the new Greenlow platform, which is built around the C230 series chipset. The new platform offers key performance improvements while delivering better graphics capabilities and more features. The Xeon E3-1200 V5 processors support up to 64 GB of DDR4 system RAM, compared to just 32 GB of DDR3/L memory on the older Xeon E3-1200 V4 processors. The new chips offer 20 Gen3 PCIe lanes compared to just 8 Gen2 lanes, a total of 10 USB 3.0 and 14 USB 2.0 ports compared to 6 USB 3.0 and 8 USB 2.0 ports, and 8 SATA 6 Gb/s ports compared to just 6.
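
If you want to sanity-check what one of these servers actually reports, a few lines of Python against the standard Linux /proc interfaces will do it. This is only a rough sketch, assuming a Linux install; the 64 GB figure in the comment is simply the platform maximum quoted above.

```python
# Minimal sketch (not from Intel's documentation): report the CPU model and
# installed RAM on a Linux dedicated server, e.g. a Xeon E3-1200 V5 box.
# Assumes the standard /proc/cpuinfo and /proc/meminfo interfaces.

def cpu_model() -> str:
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("model name"):
                return line.split(":", 1)[1].strip()
    return "unknown"

def total_ram_gb() -> float:
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal"):
                kb = int(line.split()[1])          # MemTotal is reported in kB
                return round(kb / (1024 ** 2), 1)  # kB -> GB
    return 0.0

if __name__ == "__main__":
    print(f"CPU: {cpu_model()}")
    print(f"RAM: {total_ram_gb()} GB (the E3-1200 V5 platform tops out at 64 GB DDR4)")
```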

The 1S Greenlow platform features three key Xeon product stacks: one consists of chips without an integrated graphics core, one of chips with Intel GT2 graphics, and the last of chips with premium graphics such as Iris and Iris Pro with eDRAM. These are the Skylake Xeon 4+4e (quad core + GT4e), Skylake Xeon 4+2 (quad core + GT2) and Skylake Xeon 4+0 (quad core, no graphics) configurations. In total there are five Xeon chips without graphics, three chips with the regular GT2 graphics core, three low-power chips and three data center graphics chips. All of the chips except the data center graphics SKUs use the LGA socket, while the data center graphics chips are BGA only.


Facebook pips Google with open hardware release

Facebook is releasing the hardware design for a server it uses to train artificial intelligence software.

Code-named Big Sur, the server runs Facebook’s machine learning programs, a type of AI software that “learns” and gets better at tasks over time. Facebook is contributing Big Sur to the Open Compute Project.

One use for machine learning is image recognition, but it’s being applied to all kinds of data sets, to identify things like email spam and credit card fraud.

Facebook, Google and Microsoft are all pushing hard at AI, which helps them build smarter online services.

Big Sur relies on GPUs, which are often more efficient than CPUs for machine learning tasks. It can have as many as eight high-performance GPUs that each consume up to 300 watts, and can be configured in a variety of ways via PCIe.

Facebook said the GPU-based system is twice as fast as its previous generation of hardware. “And distributing training across eight GPUs allows us to scale the size and speed of our networks by another factor of two,” it said in a blog post Thursday.
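
To make the “distributing training across eight GPUs” claim concrete, here is a minimal data-parallel training step in PyTorch. It is purely illustrative: Facebook has not published the training code it runs on Big Sur, and the model, batch size and optimizer here are arbitrary placeholders.

```python
# Illustrative data-parallel training step across all visible GPUs.
# Not Facebook's actual code; the model and hyperparameters are arbitrary.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10))

if torch.cuda.is_available() and torch.cuda.device_count() > 1:
    # Replicate the model on each GPU; every replica handles a slice of the
    # batch, which is how an 8-GPU box scales throughput over a single card.
    model = nn.DataParallel(model).cuda()
elif torch.cuda.is_available():
    model = model.cuda()

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

inputs = torch.randn(256, 1024)
targets = torch.randint(0, 10, (256,))
if torch.cuda.is_available():
    inputs, targets = inputs.cuda(), targets.cuda()

optimizer.zero_grad()
loss = loss_fn(model(inputs), targets)
loss.backward()
optimizer.step()
print(f"training step done, loss = {loss.item():.4f}")
```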

Big Sur server: top view with the cover removed

One notable thing about Big Sur is that it doesn’t require special cooling or other “unique infrastructure,” Facebook said. High-performance computers generate a lot of heat, and keeping them cool can be costly. Some are even immersed in exotic liquids to stop them overheating.

Big Sur doesn’t need any of that, according to Facebook. The company hasn’t released the full hardware specs yet, but images show a large airflow unit inside the server, presumably containing fans that blow cool air across the components. Facebook says it can use the servers in its air-cooled data centers, which avoid industrial cooling systems to keep costs down.

Like a lot of other Open Compute hardware, it’s designed to be as simple as possible. OCP members are fond of talking about the “gratuitous differentiation” that server vendors put in their products, which can drive up costs and make it harder to manage equipment from different vendors.

“We’ve removed the components that don’t get used very much, and components that fail relatively frequently — such as hard drives and DIMMs — can now be removed and replaced in a few seconds,” Facebook said. All the handles and levers that technicians are supposed to touch are colored green, so the machines can be serviced quickly, and even the motherboard can be removed within a minute. “In fact, Big Sur is almost entirely tool-less – the CPU heat sinks are the only things you need a screwdriver for,” Facebook says.

Google is also rolling out machine learning across more of its services. “Machine learning is a core, transformative way by which we’re rethinking everything we’re doing,” Google CEO Sundar Pichai said.


Prepare to patch your ASR 1000s, net admins

Oh dear: Cisco has announced a privilege-escalation bug in its Aggregation Services Router 1000 Series.

There are plenty of cases where local privilege escalation isn’t such a big deal, but it’s rather more serious when it means a low-privilege sysadmin can get root access on a router running in 100 Gbps-plus carrier and ISP deployments.

Described in Cisco’s advisory, the “root shell license bypass vulnerability”, CVE-2015-6383, arises from a lack of input validation of file names in the CLI.

“An attacker could exploit this vulnerability by authenticating to the affected device and crafting specific file names for use when loading packages”, the advisory explains.

That bypasses the license required for root shell access, Cisco says – and that means they’d thoroughly own the device.
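
The underlying bug class, trusting user-supplied file names, is easy to picture. The sketch below is a generic Python illustration (not Cisco’s code, and the package directory name is made up) of the kind of check a package loader can apply before handing a name to privileged code:

```python
# Generic illustration of filename validation before loading a package.
# This is NOT Cisco's implementation; it just shows the class of check
# whose absence the advisory describes.
import os
import re

PACKAGE_DIR = "/bootflash/packages"            # hypothetical package directory
SAFE_NAME = re.compile(r"^[A-Za-z0-9._-]+\.pkg$")

def load_package(user_supplied_name: str) -> str:
    # Reject names containing path separators, traversal sequences or other
    # unexpected characters instead of passing them straight through.
    if not SAFE_NAME.match(user_supplied_name):
        raise ValueError(f"rejected package name: {user_supplied_name!r}")

    # Resolve the path and make sure it still lives under the package dir.
    path = os.path.realpath(os.path.join(PACKAGE_DIR, user_supplied_name))
    if not path.startswith(PACKAGE_DIR + os.sep):
        raise ValueError("package path escapes the package directory")
    return path

if __name__ == "__main__":
    print(load_package("example-package.pkg"))  # accepted
    try:
        load_package("../../bin/sh")            # crafted name
    except ValueError as err:
        print(f"blocked: {err}")
```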

Of course, there’s also the matter of people getting root access who haven’t paid the appropriate license fee, but surely no customer would take advantage of a bug merely to break their license conditions and save money.

The vulnerability affects ASR 1000 Series devices running version 15.4(3)S, and they have to be patched because there’s no workaround.


Google Cloud outage caused by sysadmins

A mistaken peering advertisement from a European network took Google Cloud’s europe-west1 region offline last week for around 70 minutes.

The slip-up happened when an unnamed network owner connected a new peering link to Google and, in the process, advertised reachability for far more traffic than it could handle.
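
Route leaks of this kind are usually contained with prefix filters and maximum-prefix limits on the peering session. The sketch below is a simplified Python illustration of that general idea; it is not router configuration and not Google’s tooling, and the prefixes and limit are made-up examples.

```python
# Simplified illustration of guarding a peering session: only accept
# prefixes the peer has agreed to announce, and cap the total count.
import ipaddress

AGREED_PREFIXES = [ipaddress.ip_network(p) for p in ("192.0.2.0/24", "198.51.100.0/24")]
MAX_PREFIXES = 100  # arbitrary example limit

def accept_announcements(received):
    # A peer announcing far more prefixes than expected trips the limit
    # and the session would normally be torn down rather than leak traffic.
    if len(received) > MAX_PREFIXES:
        raise RuntimeError("max-prefix limit exceeded; tearing down the session")
    accepted = []
    for prefix in received:
        net = ipaddress.ip_network(prefix)
        # Accept only prefixes covered by what the peer agreed to announce.
        if any(net.subnet_of(agreed) for agreed in AGREED_PREFIXES):
            accepted.append(prefix)
    return accepted

if __name__ == "__main__":
    print(accept_announcements(["192.0.2.0/25", "203.0.113.0/24"]))
    # -> ['192.0.2.0/25']; the leaked 203.0.113.0/24 announcement is dropped
```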