Thread: Anandtech News
01-10-17, 12:38 PM #6621
Anandtech: Dell Announces UP3218K: Its First 8K Display, Due in March
Up to now, 8K reference displays were available only from Canon, in very low quantities and at very high prices. The displays were primarily aimed at video professionals from TV broadcasting companies like NHK, which are working on 8K content (they call it Super Hi-Vision) to be available over-the-air in select regions of Japan next year. A number of TV makers have also announced ultra-large 8K UHDTVs, but these are hardly found at retail. Overall, Dell is the first company to offer an 8K display that can be bought online by any individual with the money, and one that is aimed at the monitor market rather than at TVs.
At present, Dell is not publishing the full specifications of its UltraSharp 32 Ultra HD 8K monitor (UP3218K), but reveals key specs like resolution (7680×4320), contrast ratio (1300:1), brightness (400 nits), pixel density (280 ppi) as well as supported color spaces: 100% Adobe RGB and 100% sRGB.
For interconnection with host PCs, as a single DisplayPort 1.3/1.4 cable does not provide enough bandwidth for the 7680×4320@60 Hz configuration Dell is targeting, the UltraSharp UP3218K uses two DisplayPort 1.3 inputs, stitching the two display streams together internally using tiling. This is similar to early 5K displays, which used a pair of DisplayPort cables to get around the bandwidth limitations of DisplayPort 1.2. Using two cables is not a big problem given the target market, but it's interesting to note that because 7680×4320@60Hz alone consumes all of the bandwidth supplied by the two cables, there isn't any leftover bandwidth to support HDR or the Rec. 2020 color space.
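As a rough sanity check on that claim, here is a back-of-the-envelope Python sketch. It assumes DP 1.3/1.4 HBR3 link rates with 8b/10b encoding and ignores blanking overhead, so treat the numbers as approximations rather than a formal link budget:
[CODE]
# Rough check: can two DisplayPort 1.3 (HBR3) cables carry 7680x4320 @ 60 Hz?
# Assumptions: HBR3 = 8.1 Gbps/lane x 4 lanes per cable, 8b/10b encoding
# (80% efficiency), blanking overhead ignored for simplicity.

hbr3_raw_gbps = 8.1 * 4                  # 32.4 Gbps per cable, raw
hbr3_payload_gbps = hbr3_raw_gbps * 0.8  # ~25.9 Gbps per cable after 8b/10b
two_cables_gbps = 2 * hbr3_payload_gbps  # ~51.8 Gbps total

pixels_per_second = 7680 * 4320 * 60     # ~1.99 Gpix/s

for name, bpp in (("8-bit RGB (24 bpp)", 24), ("10-bit RGB (30 bpp)", 30)):
    needed_gbps = pixels_per_second * bpp / 1e9
    print(f"{name}: ~{needed_gbps:.1f} Gbps needed, ~{two_cables_gbps:.1f} Gbps available")

# 8-bit just fits; 10-bit (a prerequisite for HDR) does not, which lines up
# with the note above about there being no leftover bandwidth for HDR.
[/CODE]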
Dell UltraSharp 32 Ultra HD 8K (UP3218K)
Panel: 32" (IPS?)
Resolution: 7680 × 4320
Brightness: 400 cd/m²
Contrast Ratio: 1300:1
Refresh Rate: 60 Hz
Viewing Angles: 178°/178° horizontal/vertical
Color Saturation: 100% Adobe RGB
Display Colors: 1.07 billion
Inputs: 2 × DisplayPort 1.3
On a side note, while the company could have used DisplayPort 1.4's Display Stream Compression 1.2 (DSC) feature to reduce the bandwidth requirements of the monitor, it opted not to. DSC is promoted as visually lossless, but given how demanding many professionals are and the problems that potential DSC-induced artifacts could bring, Dell decided to stick with two DP 1.3 cables.
While a high display resolution is good for photos and images, it also makes everything smaller; and while modern operating systems support scaling, it does not work perfectly for all programs. It's likely that professional applications like AutoCAD or Photoshop will support 8K the day the UltraSharp UP3218K hits the market, but general use applications, already struggling with 4K and HiDPI in general, will be another matter. Practically speaking, if the price tag alone isn't convincing enough that this is a monitor for specific editing tasks and not for general desktop usage, then the lack of good HiDPI support elsewhere will. And while I'm sure someone will try to use the UP3218K as a gaming display, at four times the resolution of a 4K monitor, we're at least a few years off from GPUs being able to render high-fidelity games at a 33Mpix resolution.
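To put those figures in context, here is a quick sketch of the pixel-count and scaling arithmetic (the 31.5-inch diagonal is an assumption for a "32-inch-class" panel, and the 96 dpi baseline is the traditional desktop reference):
[CODE]
import math

# 8K UHD vs. 4K UHD pixel counts
pix_8k = 7680 * 4320   # ~33.2 million pixels
pix_4k = 3840 * 2160   # ~8.3 million pixels
print(f"8K is {pix_8k / pix_4k:.0f}x the pixels of 4K ({pix_8k / 1e6:.1f} Mpix)")

# Pixel density, assuming a 31.5-inch diagonal for the "32-inch" panel
ppi = math.hypot(7680, 4320) / 31.5
print(f"~{ppi:.0f} ppi")

# Rough UI scaling factor against the traditional 96 dpi desktop baseline
print(f"~{ppi / 96:.1f}x (roughly 300%) scaling needed for comfortable UI sizes")
[/CODE]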
Dell has promised to start selling the UltraSharp 32 Ultra HD 8K monitor (UP3218K) on March 23 on its website. At launch, the monitor will be priced at $4,999. Time to put in some hardware requisition forms.
Gallery: Dell Announces UP3218K: Its First 8K Display, Due in March
- ASUS ProArt PA32U Display: 4K, 1000 Nits Brightness, 95% DCI-P3, 85% Rec. 2020
- LG Announces 32UD99: 4K IPS Display with 95% DCI-P3, HDR and USB-C
- BenQ Launches the SW320: a 4K Display with HDR for Professionals
- Panasonic Develops IPS Panel with 1,000,000:1 Contrast Ratio, 1000 Nits Brightness
- EIZO Launches ColorEdge CG2730 and CS2730 2560×1440 Displays for Professionals and Prosumers
- Dell Introduces UltraSharp UP3017 30-Inch Professional Display with 16:10 Aspect Ratio and DCI-P3 Color Space
01-10-17, 02:23 PM #6622
Anandtech: NVIDIA Launches SHIELD TV: Smart Home Functionality, More 4K HDR Streaming
NVIDIA launched the SHIELD Android TV back in the spring of 2015 as a quirky combination of lightweight Android gaming console and Android TV set-top box, and the device quickly earned recognition as the most powerful and capable Android TV device on the market. Thanks to the versatility of the Tegra X1 SoC in general and its Maxwell display controller in particular, NVIDIA launched a box capable of displaying 4Kp60 content, and it has further improved the feature set of its STB over the last two years. In particular, NVIDIA added HDR display and streaming support, and the SHIELD was among the first devices to be certified for 4Kp60 streaming from Netflix and other over-the-top streaming services. While the original SHIELD Android TV still has a lot of potential, some new features are being introduced with a new model, which is what the new SHIELD TV is all about.
From a hardware specifications point of view, the new SHIELD TV is essentially a cut-down version of the original 2015 model. The new device contains the same Tegra X1 SoC, similar RAM/storage configuration, and much the same I/O, etc. What NVIDIA has done away with is the 2.5" HDD bay (used in the Pro model), along with the microSD card slot and a micro-USB 2.0 port. Instead the device's expandability and connectivity is delivered through the use of two USB 3.0 ports, along with Gigabit Ethernet and 2x2 802.11ac WiFi. As a result, the new SHIELD TV is considerably smaller and lighter than the original SHIELD Android TV thanks to the space savings (especially removing the HDD bay).
Another notable difference between the 2017 SHIELD TV and the 2015 SHIELD Android TV packages is the new gamepad controller, which loses some weight, ditches the touchpad, but gains a microphone for the Google Assistant. The latter allows users to find content, control playback and locate other information using voice commands. Technically, the original SHIELD Android TV also has a microphone on its remote, but it needs manual activation, whereas the one on the new controller is “always on.” In fact, the new SHIELD controller can be bought separately to add hands-free commands to NVIDIA's original SHIELD Android TV.
NVIDIA SHIELD STB Family: SHIELD TV | SHIELD TV Pro | SHIELD Android TV
SoC: Tegra X1 (4 × Cortex-A57 + 4 × Cortex-A53, Maxwell 2 SMM GPU)
RAM: 3 GB LPDDR4-3200
Storage: 16 GB NAND | 16 GB NAND + 500 GB HDD | 16 GB NAND + 500 GB HDD (Pro only)
Display Connectivity: HDMI 2.0b with HDCP 2.2 (4Kp60, HDR)
Dimensions (new SHIELD TV): 98 mm (H) × 159 mm (W) × 26 mm (D)
Weight: 250 grams (SHIELD TV) | 654 grams (SHIELD TV Pro / SHIELD Android TV)
Power Adapter: 40 W
Wireless: 2x2 802.11a/b/g/n/ac
USB: 2 × USB 3.0 (SHIELD TV) | 2 × USB 3.0 + 1 × micro-USB 2.0 (SHIELD TV Pro / SHIELD Android TV)
IR: IR receiver
Ethernet: Gigabit Ethernet
Launch Product Bundle: SHIELD Controller
Launch Price: $199.99 | $299.99 | Basic: $199.99
In addition to its media and gaming duties, the new SHIELD TV can work as a SmartThings Hub ($99 when sold separately) and, when combined with an appropriate radio dongle, can communicate with compatible devices (such as Nest) using the Zigbee and Z-Wave communication protocols.
Meanwhile, later this year NVIDIA plans to release its Spot device. The $50 Spot is a wireless microphone and a speaker that can be plugged in any power outlet within a home, relaying voice commands to SHIELD TV. The Spot is meant to further improve the usefulness of the SHIELD TV as a smart home hub by expanding the range over which it can hear commands for Google Assistant and compatible SmartThings devices.
The new SHIELD TV STBs will come with NVIDIA's remote and the new gamepad by default. NVIDIA will ship the new SHIELD TV STB on January 16 and the systems are already available for pre-order at $199.99. The SHIELD TV Pro model with 500 GB HDD is priced at $299.99 and will ship on January 30.
Buy NVIDIA SHIELD TV (2017) on Amazon.com
Gallery: NVIDIA Launches SHIELD TV: Smart Home Functionality, 4K HDR Game Streaming
- The NVIDIA SHIELD Android TV Review: A Premium 4K Set Top Box
- NVIDIA Announces SHIELD Console: Tegra X1 Android TV Box Shipping In May
- NVIDIA SHIELD Android TV Console Adds Support for Vudu, HDR and 4Kp60 Content
- NVIDIA's GeForce NOW - GRID Cloud Gaming Service Goes the Subscription Way
- NVIDIA SHIELD Android TV OTA Update Improves HTPC Credentials
01-10-17, 04:51 PM #6623
Anandtech: GIGABYTE Quietly Launches Low Profile GeForce GTX 1050, 1050 Ti Graphics Cards
GIGABYTE’s GeForce GTX 1050 Ti OC Low Profile 4G and GeForce GTX 1050 OC Low Profile 2G graphics adapters are based on NVIDIA’s GP107 GPU (albeit, in different configurations) and carry 4 GB and 2 GB of GDDR5 memory running at 7 Gbps, respectively. Both cards use the same PCB (marked as V16156-0) as well as the dual-slot cooling system featuring an aluminum heatsink and a fan. As for connectivity, the boards also have a similar set of outputs: one DL-DVI-D, two HDMI 2.0b and one DisplayPort 1.4 with HDCP 2.2 support that is required for Ultra HD Blu-ray playback.
It seems like low profile graphics cards are back courtesy of NVIDIA’s GeForce GTX 1050 series, as GIGABYTE is the second company to announce such parts after MSI, and it is likely that these two companies will not be the only suppliers of such products. For those building low-power HTPCs or upgrading entry-level PCs, the GP107-based graphics adapters seem to be a good choice because the GPU supports the DirectX 12 and Vulkan APIs and has an advanced media playback engine that features hardware-accelerated decoding and encoding of H.265 (HEVC) video.
Specifications of Low Profile GeForce GTX 1050-Series Graphics Cards
Columns: GIGABYTE GTX 1050 Ti OC LP 4G | MSI GTX 1050 Ti LP | GIGABYTE GTX 1050 OC LP 2G | MSI GTX 1050 LP
SKU: GV-N105TOC-4GL | - | GV-N1050OC-2GL | -
Stream Processors: 768 (GTX 1050 Ti) | 640 (GTX 1050)
Texture Units: 48 | 40
ROPs: 32
Core Clock (MHz): 1303 - 1328 | 1290 | 1366 - 1392 | 1354
Boost Clock (MHz): 1417 - 1442 | 1392 | 1468 - 1506 | 1455
Memory Capacity: 4 GB (GTX 1050 Ti) | 2 GB (GTX 1050)
Memory Type: GDDR5
Memory Clock: 7 Gbps
Bus Width: 128-bit
Outputs: 1 × DisplayPort 1.4, 1 × DVI-D, HDMI 2.0b: 2 | 1 | 2 | 1
TDP: 75 W
Launch Date: 1/2017 | 11/2016 | 1/2017 | 11/2016
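As a quick sanity check on the table, the memory bandwidth and peak FP32 throughput implied by those numbers follow from the usual formulas. A short sketch (the boost clocks below are the upper GIGABYTE values from the table):
[CODE]
# Memory bandwidth: effective data rate x bus width
mem_rate_gbps = 7            # GDDR5 at 7 Gbps per pin
bus_width_bits = 128
print(f"Memory bandwidth: {mem_rate_gbps * bus_width_bits / 8:.0f} GB/s")  # 112 GB/s

# Peak FP32 throughput: shaders x 2 FLOPs (FMA) x boost clock
for name, shaders, boost_mhz in (("GTX 1050 Ti OC LP", 768, 1442),
                                 ("GTX 1050 OC LP", 640, 1506)):
    tflops = shaders * 2 * boost_mhz * 1e6 / 1e12
    print(f"{name}: ~{tflops:.2f} TFLOPS peak FP32")
[/CODE]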
GIGABYTE does not specify MSRPs for its GeForce GTX 1050 Ti OC Low Profile 4G and GeForce GTX 1050 OC Low Profile 2G graphics adapters on its website, as these are typically determined at regional release. Given the positioning of these products, it is unlikely that they will cost significantly more than NVIDIA’s MSRPs for similar video cards: $139 for the GTX 1050 Ti and $109 for the GTX 1050.
Gallery: GIGABYTE Quietly Launches Low-Profile GeForce GTX 1050, 1050 Ti Graphics Cards
- MSI Adds Low-Profile GeForce GTX 1050 Ti to Lineup
- NVIDIA Announces GeForce GTX 1050 Ti & GTX 1050: Entry-Level Cards Launching October 25th
- MSI Shows New Radeon RX 480 Gaming Cards, with an 8-pin
- NVIDIA Releases GeForce GTX 1060 3GB: GTX 1060, Yet Not
01-11-17, 07:13 AM #6624
Anandtech: AMD Set to Launch Ryzen Before March 3rd, Meeting Q1 Target
Join AMD Game Engineering team members for an introduction to the recently-launched AMD Ryzen CPU followed by advanced optimization topics.
01-11-17, 09:54 AM #6625
Anandtech: Corsair’s Bulldog 2.0 Gets Kaby Lake-Compatible Z270 Motherboard, New Cooler
Corsair’s Bulldog case blends enthusiast-class performance and features with living room aesthetics, which is a rather rare combination. The Bulldog chassis can accommodate a mini-ITX motherboard, a full-height graphics card (no longer than 300 mm and no thicker than 90 mm), two liquid cooling systems, a 3.5” HDD, up to three 2.5” storage devices, multiple fans, as well as an SFX power supply. At CES, Corsair demonstrated its new Bulldog 2.0 barebones kit featuring MSI’s Z270I Gaming Pro Carbon AC motherboard, its own new Hydro H6 SF low-profile cooler, as well as the SF600 PSU.
Compared to the initial product, the key differentiators of the new barebones kit are the motherboard and the LCS. The latter is not yet available separately and the company has not even published its specs. The only thing that Corsair says about the H6 SF is that it is quiet even when it has to cool down an overclocked CPU, which is not a particularly detailed description. As for the Z270I Gaming Pro Carbon AC motherboard, it comes with an LGA1151 socket supporting both Kaby Lake-S and Skylake-S processors, two DIMM slots for up to 32 GB of DDR4 memory, a PCIe 3.0 x4/NVMe M.2-2280 slot for SSDs and a PCIe x16 slot for graphics cards. The motherboard is equipped with the new ASMedia ASM2142 controller (which uses a PCIe 3.0 x2 interface and thus provides up to 16 Gbps of bandwidth to two USB 3.1 Gen 2 ports) powering the USB 3.1 Gen 2 Type-A/C headers, Intel’s dual-band Wireless-AC 8265 module (Wi-Fi 802.11ac + BT 4.2), Intel’s I219-V Gigabit Ethernet controller, a 7.1-channel Realtek ALC1220-based audio sub-system, SATA connectors and so on.
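The "up to 16 Gbps" figure for the ASM2142 follows directly from PCIe arithmetic; here is a minimal sketch (assuming PCIe 3.0 at 8 GT/s per lane with 128b/130b encoding and ignoring protocol overhead):
[CODE]
# PCIe 3.0 x2 uplink of the ASMedia ASM2142 vs. its two USB 3.1 Gen 2 ports
gt_per_lane = 8.0                 # 8 GT/s per PCIe 3.0 lane
encoding_efficiency = 128 / 130   # 128b/130b line coding
lanes = 2

uplink_gbps = gt_per_lane * encoding_efficiency * lanes
usb_gbps = 2 * 10                 # two USB 3.1 Gen 2 ports at 10 Gbps (nominal) each

print(f"PCIe 3.0 x2 uplink: ~{uplink_gbps:.2f} Gbps (marketed as 'up to 16 Gbps')")
print(f"USB ports: {usb_gbps} Gbps nominal, so the uplink only becomes a bottleneck "
      "when both ports are saturated simultaneously")
[/CODE]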
The motherboard looks to be more advanced than the one installed in the first-gen Bulldog, as it is based on the newer Intel Z270 PCH, supports Optane Memory caching, and features a newer audio codec and an improved USB 3.1 (10 Gbps) controller. If the H6 SF LCS is really quieter than its predecessor, then the Bulldog 2.0 has a nice set of improvements over the first version.
Corsair Bulldog 2.0 Barebones Kit: Quick Specs
Motherboard: MSI Z270I Gaming Pro Carbon AC
CPU Cooler: Corsair Hydro H6 SF
PSU: Corsair SF600 (600 W, 80 Plus)
Dimensions (W×H×D): 457 mm × 133 mm × 381 mm
Weight: 5 kilograms
Motherboard Form Factor: Mini-ITX
PSU Form Factor: SFX
3.5" Drive Bays: 1
2.5" Drive Bays: 1 if a 3.5" drive is installed, 3 if the 3.5" bay is unused
System Fans: 2 × 92 mm (included), 1 × 120 mm
CPU Cooler Dimensions: Up to 90 mm in height
Graphics Card Length: 300 mm
PSU Length: 130 mm
External Connectors: Power, Audio, USB 3.0, USB 3.1, Display, etc.
Gallery: Corsair’s Bulldog 2.0 Gets Kaby Lake-Compatible Z270 Motherboard, New Cooler
- ASUS VivoPC X: Core i5, GeForce GTX 1060, 512 GB SSD, 5-Liter Chassis, $799
- GIGABYTE's New Console: The 'Gaming GT' PC Launched with Core i7-K, GTX1080, TB3
- Zotac ZBOX MAGNUS EN1080 SFF PC Review: A Premium Gaming Powerhouse
01-11-17, 11:00 AM #6626
Anandtech: SK Hynix Announces 8 GB LPDDR4X-4266 DRAM Packages
LPDDR4X is a new mobile DRAM standard that is an extension of the original LPDDR4 and is expected to reduce the power consumption of the DRAM sub-system by 18~20% according to its developers (everything else remains the same: a 200~266 MHz internal memory array frequency, 16n prefetch, etc.). To do that, LPDDR4X cuts the output driver (I/O VDDQ) voltage by 45%, from 1.1 V to 0.6 V. LPDDR4X is supported by a number of mobile SoC developers. The first application processor to support the new type of memory is MediaTek’s Helio P20, which was announced nearly a year ago, and the initial devices powered by the chip are likely to hit the market in 1H 2017. Another notable SoC to support LPDDR4X is Qualcomm’s new flagship Snapdragon 835, which was announced in November and detailed earlier this month. Smartphones featuring this chip will not show up for a while, but MWC is just around the corner, which lends itself nicely to various handset announcements.
SK Hynix did not announce exact power consumption figures for its LP4X parts, but it confirmed that the 45% reduction in I/O voltage reduces the power consumption of the whole memory sub-system by around 20% versus a hypothetical LPDDR4 memory sub-system running at the same frequency under the same conditions. This is not an exact comparison because SK Hynix’s LPDDR4 offerings top out at 3733 MT/s. Assuming that the manufacturer did not optimize the design of its LPDDR4X DRAM arrays to reduce power consumption, but only reduced VDDQ to 0.6 V, a memory sub-system based on the new 8 GB LP4X-4266 part should consume less than a similar sub-system running the company’s 8 GB LP4-3733 stack, but the exact figure is unknown.
SK Hynix 8 GB LPDDR4X DRAM Packages
Part Numbers: H9HKNNNFBUMU, H9HKNNNFBMMUDR, H9HKNNNEBMMUER, H9HKNNNEBMAUDR
DRAM IC Capacity: 16 Gb / 12 Gb
Number of DRAM ICs: 4
Package Capacity: 64 Gb (8 GB) / 48 Gb (6 GB)
Data Rate: 4266 MT/s / 3733 MT/s
Bus Width: x64
Bandwidth: 34.1 GB/s / 29.8 GB/s
Package: FBGA-376 / FBGA-366
Dimensions: 12 mm × 12.7 mm
Voltages: 1.8 V / 1.1 V / 0.6 V
Process Technology: 21 nm
Availability: 2017
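Several of the numbers above follow from the LPDDR4/LPDDR4X basics mentioned earlier; here is a short sketch of the arithmetic (the package bandwidth assumes the x64 bus width from the table):
[CODE]
# LPDDR4X arithmetic sketch

# 1) Internal array clock implied by the data rate and 16n prefetch
prefetch = 16
for data_rate_mts in (3200, 4266):
    print(f"{data_rate_mts} MT/s / {prefetch}n prefetch = "
          f"~{data_rate_mts / prefetch:.1f} MHz internal array clock")

# 2) Package bandwidth from data rate and bus width (x64 = 8 bytes per transfer)
data_rate_mts = 4266
print(f"{data_rate_mts} MT/s x 64-bit = ~{data_rate_mts * 8 / 1000:.1f} GB/s")  # ~34.1 GB/s

# 3) The "45%" I/O voltage cut
vddq_lp4, vddq_lp4x = 1.1, 0.6
print(f"VDDQ cut: {(1 - vddq_lp4x / vddq_lp4) * 100:.0f}%")  # ~45%
[/CODE]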
Initially, SK Hynix will offer only 8 GB LPDDR4X packages with a 4266 MT/s data transfer rate based on its 16 Gb DRAM ICs. Eventually, the company intends to expand the lineup with 6 GB/8 GB LPDDR4X-3733 (these are already listed in the company's Q1 databook) and LPDDR4X-3200 solutions, as well as parts based on 8 Gb LPDDR4X ICs (these are not listed in the official documents, but are mentioned in the company's official blog post). The latter make a lot of sense because not all mobile devices are going to use 8 GB of DRAM this year. SK Hynix quotes researchers from IHS Markit, who believe that a high-end smartphone this year is going to integrate 3.5 GB of memory on average (a mix of 3 GB, 4 GB, 6 GB and 8 GB solutions on Android). Meanwhile, keep in mind that DRAM requirements for Apple’s iOS and Google’s Android are different, which is why smartphones running the latter need more memory, and handsets featuring 4 GB of mobile DRAM are going to become mainstream in 2017. By contrast, Apple’s iPhone 7 and iPhone 7 Plus have 2 GB and 3 GB of DRAM, respectively.
SK Hynix said that its 8 GB LPDDR4X-4266 packages are already in mass production. Mobile devices based on the new memory are expected to arrive in the coming months, and it is highly likely that select manufacturers will demonstrate their MediaTek Helio P20- and LPDDR4X-based products at MWC next month.
- MediaTek Announces New Helio P20
- Qualcomm Details Snapdragon 835: Kryo 280 CPU, Adreno 540 GPU, X16 LTE
- Hot Chips 2016: Memory Vendors Discuss Ideas for Future Memory Tech - DDR5, Cheap HBM, & More
- SK Hynix Updates Lineup: 8 GB LPDDR4 DRAM Packages for Mobile Devices
01-11-17, 12:10 PM #6627
Anandtech: Intel Compute Card: A Universal Compute Form-Factor for Different Kinds of
Computing has become so ubiquitous nowadays that almost every remotely sophisticated piece of hardware has a microprocessor inside. Many such devices are designed to operate for years, but since the computer chips inside them get outdated, it is almost impossible to upgrade their functionality (e.g., add new security capabilities, speed up processing, etc.) without replacing the whole unit, or a significant part of it. Similarly, if a CPU or a memory IC fails, repairing such a device may cost a lot in terms of money and effort, which translates to downtime and lost revenue. Such things happen for various reasons, but the main two are proprietary platforms with tight integration and no upgrade path, or complex architectures that do not allow for a quick replacement of faulty components. The list of such devices includes everything from business PCs to point-of-sale kiosks and from smart TVs to commercial equipment.
From a technology standpoint, Intel’s Compute Card resembles the company’s Compute Stick PC. However, its purpose is much wider: it is a small device that packs an Intel SoC or SiP (including Kaby Lake-based Core processors with vPro and other technologies), DRAM, NAND flash storage, a wireless module and so on into a small enclosure. Nonetheless, there are a number of important differences between the Compute Card and the Compute Stick. The Compute Card is a sealed system with “flexible I/O” in the form of a USB Type-C and an extension connector. The “flexible I/O” is not Thunderbolt (obviously, due to power consumption concerns), but it handles USB, PCIe, HDMI, DisplayPort connectivity and has some extra pins for future/proprietary use.
Intel Compute Card at a Glance
CPU: Various Intel SoCs / SiPs, including Intel Core with vPro (up to 6 W TDP)
DRAM, NAND: Integrated
Cooling: Fanless, but I/O docks may have their own thermal design
Dimensions: 94.5 mm × 55 mm × 5 mm
I/O (Physical): USB-C + extension connector
I/O (Logical): USB, PCIe, HDMI, DP and additional signals
Wireless: Wi-Fi, Bluetooth
Docking: Integrated locking mechanism
Launch Partners: Dell, HP, Lenovo, Sharp and local companies
Initial Availability: Mid-2017
The other interesting aspect here is the future of the Compute Stick form factor. Given that ARM-based HDMI sticks are no longer a popular form factor, it is not surprising that Intel has decided not to update the Compute Stick lineup with Kaby Lake for now. Intel indicated that it would be evaluating the future of the Compute Stick in 2018, and will decide then whether it warrants an update with the latest processors. Our opinion is that the Compute Stick form factor has reached the end of its life, and it is up to the Compute Card to carry on the miniaturization revolution. The Compute Card has much more flexibility in terms of differentiation from the vendors' side, and it is not encumbered by an active cooling mechanism. Obviously, the ability to just plug the device into an HDMI port is not there, but the Compute Card, by itself, is light enough to hang directly off a display's HDMI port. Therefore, it is possible that some vendors will deliver a Compute Stick-like platform based on the Compute Card as well (albeit with a slightly different form factor).
We expect to hear more about Compute Card related projects in Q3, either during Computex or Intel's Developer Forum.
01-11-17, 02:26 PM #6628
Anandtech: GIGABYTE Exhibits an Aquantia AQC107 based 10G Ethernet PCIe Card
The GIGABYTE solution is a PCIe 3.0 card featuring a single 10G port, and it is designed to fit both half-height and full-height slots. Only an AQC107-based 10G version was at the show, and it wasn’t clear if a 2.5G/5G version using the AQC108 would be inbound, but at this point in time GIGABYTE is keeping its cards close to its chest.
I’ve told GIGABYTE that when the cards are available, I will take a few for testing. I’m slowly building up a sizable stack of 10GBase-T controllers, and we might start looking into relevant networking tests for them for comparisons. Any suggestions, please let us know.
Gallery: GIGABYTE Exhibits an Aquantia AQC107 based 10G Ethernet PCIe Card
01-11-17, 04:32 PM #6629
Anandtech: CES 2017: GIGABYTE Shows Passive Apollo Lake BRIX in Embedded UCFF
01-12-17, 04:01 AM #6630
Anandtech: HTC Announces New Phones For U: HTC U Play and HTC U Ultra
Gallery: HTC U Ultra: Colors
Both phones have a single-piece volume rocker and highly textured power button on the right edge. There’s also a single downward-firing speaker and a microphone on the bottom edge. Unlike most phones that use a symmetrical pattern of slits to conceal these components, the HTC U phones just use a small circular hole for the microphone. Neither phone includes a 3.5mm headphone jack, instead using the USB Type-C port for audio.
While the HTC 10's rear camera generally produces high-quality images and uses the Sony IMX377 Exmor R sensor, the U Ultra is likely using the Sony IMX378 Exmor RS sensor that’s found in Google’s Pixel phones, because it includes an upgraded hybrid autofocus system that combines PDAF with laser AF, which the IMX377 sensor does not support, to improve performance over a broader range of lighting conditions. The U Ultra’s rear camera has OIS to help remove hand shake during long-exposure shots and is paired with a larger-aperture f/1.8 lens that should help it capture more light. The U Ultra’s rear camera module is square and sticks out farther than the U Play’s camera, which is likely a consequence of using a lens with a larger aperture.
The U Play’s MediaTek Helio P10 SoC includes eight Cortex-A53 CPUs and a Mali-T860MP2 GPU for limited gaming. The U Ultra steps up to a Snapdragon 821 SoC; however, like Google’s Pixel phones, it uses the same peak CPU frequencies as the lower-clocked Snapdragon 820. It’s a curious design choice, but it should not have a noticeable impact on everyday performance.
HTC U Play vs. HTC U Ultra
SoC: MediaTek Helio P10 (4x Cortex-A53 @ 2.0GHz + 4x Cortex-A53 @ 1.1GHz) | Qualcomm Snapdragon 821 (MSM8996 Pro AB, 2x Kryo @ 2.15GHz + 2x Kryo @ 1.59GHz)
RAM: 3GB / 4GB LPDDR3 | 4GB LPDDR4
NAND: 32GB / 64GB (eMMC 5.1) + microSD (SDXC) | 64GB / 128GB (UFS 2.0) + microSD (SDXC)
Display: 5.2-inch 1920x1080 IPS LCD | 5.7-inch 2560x1440 IPS LCD + 2.0-inch 160x1040 IPS LCD
Dimensions: 145.99 x 72.9 x 3.50-7.99 mm | 162.41 x 79.79 x 3.60-7.99 mm
Modem: MediaTek (integrated), 2G / 3G / 4G LTE (Category 6), FDD-LTE / TD-LTE / WCDMA / GSM | Qualcomm X12 LTE (integrated), 2G / 3G / 4G LTE (Category 12), FDD-LTE / TD-LTE / WCDMA / GSM
SIM Size: 1x / 2x NanoSIM | 1x / 2x NanoSIM
Front Camera: 16MP, UltraPixel, f/2.0, 28mm focal length, Auto HDR | 16MP, UltraPixel, Auto HDR
Rear Camera: 16MP, f/2.0, 28mm focal length, PDAF, OIS, Auto HDR, dual-tone LED flash | 12MP, 1.55µm pixels, f/1.8, PDAF + laser AF, OIS, Auto HDR, dual-tone LED flash
Battery: 2500 mAh | 3000 mAh
Wireless: 802.11a/b/g/n/ac, BT 4.2, NFC, GPS/GNSS | 802.11a/b/g/n/ac, BT 4.2, NFC, GPS/GNSS/Beidou
Connectivity: USB 2.0 Type-C | USB 3.1 Type-C
Launch OS: Android 7.0 with HTC Sense | Android 7.0 with HTC Sense
I would definitely like to see larger batteries in both phones. Inside the U Play’s 3.50-7.99 mm thick chassis is a 2500 mAh battery. For comparison, the smaller 5.0-inch Google Pixel packs a 2770 mAh battery into a 7.3-8.5 mm thick chassis, and the 5.1-inch Samsung Galaxy S7 packs a 3000 mAh battery in its 7.9 mm thick chassis. It’s a similar story for the U Ultra, whose 3000 mAh battery inside a 3.60-7.99 mm thick chassis looks small compared to the 3450 mAh battery in the 0.5 mm thicker but smaller 5.5-inch Pixel XL, or the 3600 mAh battery in the smaller (5.5-inch) and thinner (7.7 mm) Galaxy S7 edge. The U Ultra does include Qualcomm’s Quick Charge 3.0 for rapid charging, while the U Play supports 5V/2A charging.
The HTC U Play will be available in select markets around the globe in Q1 2017, with options for either 3GB or 4GB of RAM and 32GB or 64GB of internal storage. Meanwhile, the HTC U Ultra, which comes with either 64GB of internal storage and Gorilla Glass 5 covering the front or 128GB of storage and harder sapphire glass, is available for pre-order in the US starting January 12, exclusively at htc.com. HTC has not revealed pricing information yet.