
Thread: Anandtech News

  1. RSS Bot FEED's Avatar
    Join Date
    09-07-07
    Posts
    29,643
    Post Thanks / Like
    #5031

    Anandtech: Google Introduces Google Photos

    Today during their I/O 2015 keynote Google announced Google Photos, a new service that allows users to easily upload, edit, and share photographs from all of their devices. Google Photos is effectively a reboot of Google's previous photo sharing efforts through Google+, this time as a standalone product with its own dedicated apps and website.
    Google's goal is that Google Photos will become a place where users can permanently store a continually growing collection of photos from their cameras and mobile devices. They also hope to improve upon the organization and sharing of photos, which has become a difficult problem to tackle with people taking and sharing more photos than ever.
    The big promise of Google Photos is unlimited photo storage. This is a huge step above the measly 5GB of iCloud storage you can use with Apple's Photos offerings, and still an improvement over services like Flickr, which offers users 1TB of storage. However, there is a caveat to the unlimited storage. While you aren't limited in the number of photos or videos you can store, you are somewhat limited in their quality. Users who opt for unlimited storage can only store images at up to 16MP, and videos at up to 1080p. This shouldn't really affect users who intend to use the service for storing photos from their smartphone, as most smartphone cameras have resolutions lower than 16MP.
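    As a rough sketch of the tier rules as described (the 16MP photo and 1080p video cutoffs; how Google actually enforces or resizes uploads on the server side isn't public, so this is purely illustrative):

```python
# Sketch of the free-tier rule as described: photos up to 16 MP and
# videos up to 1080p are stored for free. The actual server-side
# handling (e.g. automatic downscaling) is Google's and not shown here.
def fits_free_tier(width_px, height_px, is_video=False):
    if is_video:
        return height_px <= 1080
    megapixels = (width_px * height_px) / 1_000_000
    return megapixels <= 16.0

print(fits_free_tier(4608, 3456))                # ~15.9 MP phone photo -> True
print(fits_free_tier(7360, 4912))                # ~36 MP DSLR shot -> False
print(fits_free_tier(1920, 1080, is_video=True)) # 1080p video -> True
```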
    There is another option for users who want to use Google Photos for high resolution pictures from a DSLR or mirrorless camera, or who just have a very high resolution smartphone. Users can opt to have Google Photos use their Google account's 15GB of storage, and with this option there are no limits on file size or resolution. Since many devices come with promotions that give away 1TB of free Google Drive storage, I suspect that this option will be popular among users who want to keep their photos at the highest possible resolution while staying within Google's ecosystem instead of going with another photo offering like Flickr.
    The second half of Google Photos is how it will intelligently organize your photos. Google can analyze the content of photos and group them into categories based on their subject. While I haven't seen this in action, if it does actually work as well as Google claims then it would remove much of the hassle involved with organizing your photo collection.
    Google Photos also allows for groups of photos to be shared. You can share a link to one of Google's automatically created groups, or you can make a collection of photos and get a single link to share them all at once. There's no need for the person you're sharing them with to have a Google+ account or to have the app installed.
    In addition to grouping and sharing, Google Photos has all the other features that one would expect from a photos app such as simple color adjustments, cropping, and other editing controls. Google Photos will be available today across essentially all major platforms, with apps available for iOS, Windows, OS X, and an update to the existing Photos app coming on Android.


    More...

  2. #5032

    Anandtech: Google Announces Project Brillo and Weave for IoT

    As part of today's announcements at I/O 2015, Google announced a new operating system and API targeted at the Internet-of-Things (IoT) space.
    Starting off with the new OS, under the codename Project Brillo, Google promises a very stripped down Android-derived operating system. Google explains that it keeps the lower-level components of Android, such as the specific Linux kernel modifications and hardware abstraction layers. Device and, most importantly, SoC manufacturers can reuse and continue basing their software stacks on the Android frameworks that have been standardized in the mobile space. Project Brillo is meant to offer a versatile OS with minimal system requirements for IoT devices such as your thermostat or light switch.
    To tie the IoT ecosystem together, Google also announced Weave, an API framework meant to standardize communications between all these devices. We don't have much technical information yet, but from the code snippets shown in the presentation it looks like a straightforward, descriptive syntax standard in JSON format. Weave is a platform-agnostic API that can be implemented by any vendor or developer who wishes to do so.
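    There is no public schema yet, so purely as an illustration of what a descriptive JSON command standard can look like (every field name below is a guess, not Weave's real vocabulary), a message might be built and parsed like this:

```python
import json

# Hypothetical Weave-style device command: the real schema is
# unpublished, so every field name here is an illustrative guess.
command = {
    "device": "thermostat",         # hypothetical device class
    "trait": "temperatureSetting",  # hypothetical capability name
    "command": "setTarget",
    "parameters": {"targetTempC": 21.5},
}

message = json.dumps(command)   # what would travel between devices
decoded = json.loads(message)
print(decoded["parameters"]["targetTempC"])  # 21.5
```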
    Google plans to release the developer preview of Project Brillo in Q3, while the Weave API standard is supposed to be published in Q4 later in the year.


    More...

  3. #5033

    Anandtech: AMD Releases Catalyst 15.5 Beta Drivers

    AMD has pushed out another Catalyst beta driver update this afternoon, bringing their latest drivers up to Catalyst 15.5 beta. The release notes for this driver indicate that it’s specifically focused on two recently released games – Project Cars and The Witcher 3 – offering significant performance improvements for both games, along with a new Crossfire profile for The Witcher 3.
    The internal build number on these drivers is 14.502.1014.1001, so for anyone counting branches these new drivers are an incremental update over the 15.4 betas, and at this point I’m not expecting any differences in games other than the two games AMD is specifically targeting.
    Finally, as always, you can grab the drivers for all current desktop, mobile, and integrated AMD GPUs over at AMD’s Catalyst beta download page.


    More...

  4. #5034

    Anandtech: Avago Acquires Broadcom Corporation For $37 Billion

    Today Avago Technologies announced their acquisition of Broadcom for 17 billion dollars in cash and 20 billion dollars worth of Avago shares. This will leave Broadcom shareholders with 32% of the new combined company, and the combined 37 billion dollar value of the deal makes it one of the largest ever acquisitions in the history of the semiconductor industry.
    Avago is a semiconductor company that produces a number of different products. They make PCIe switches for motherboards, and at one point owned SandForce and its SSD controllers before selling them to Seagate. Some of their products also cross over with Broadcom’s focus on the wireless industry, such as their power amplifiers for mobile radios. Broadcom is a company that we encounter frequently in the mobile space, as they make many different chips for WiFi and GNSS. If you have a smartphone, it’s very likely that something in it was made by Broadcom. They also produce many of the chips used in networking equipment for data centers.
    As for why Avago would seek to acquire Broadcom, it could be that both Avago and Broadcom hope to combine their product portfolios in order to compete with Intel and Qualcomm in the areas of wired and wireless networking. The combined company after the acquisition will be based in Avago’s home of Singapore, and will continue to be called Broadcom.
    Source: Reuters


    More...

  5. #5035

    Anandtech: ASRock at Computex 2015: Pre-Show PR gives Z170 Motherboards and 4T4R Router

    Next week is the annual Computex trade show and we have a substantial number of meetings booked, but as part of the regular pre-show ritual, companies are coming at us with the start of their press release mêlée. One of the first to officially lift their embargo is ASRock, showing off some impressive equipment ready for media to gawk at when we hit the booth on the show floor.
    First up is a tantalising teaser of what is to come. Anyone interested in the PC space is talking about the upcoming Broadwell and the iteration after it, Skylake. Skylake for desktops will require a new chipset and new motherboard, and we at least know that Z170 will be part of the lineup (H, B and Q series chipsets are likely in the scheme as well). A big part of Computex in recent years has been showing off these designs regardless of the launch window, and ASRock’s PR today mentions two such Z170 motherboards: the Z170 Gaming K6 and the Z170 Extreme7.
    The Z170 Gaming K6 throws up some interesting talking points. We have an ASRock gaming logo on the chipset, which is supposed to be akin to a praying mantis and will most likely supplant the Fatal1ty branding on the gaming range. The new socket looks similar to the one used for Z87 and Z97. The PCIe slots are split electrically x16/x8/x4, with an Ultra M.2 in the middle suggesting a PCIe 3.0 x4 M.2 slot. Killer networking returns on this platform, and it would seem that SATA Express is also along for the ride. In the top left, you’ll notice the DRAM slots are listed as DDR4_A1, DDR4_A2 and so on, with single sided latches supporting the DDR4 modules.
    The Z170 Extreme7 images are more exciting, showing off three M.2 slots between the PCIe slots. These are all listed as Ultra M.2, which means PCIe 3.0 x4 bandwidth each, or twelve lanes in total. At this point details of the Z170 platform have not been released, but having access to three M.2 x4 slots means either that some can only be used when integrated graphics is in play, that the CPU has more than 16 lanes, or that some of them run off the chipset, none of which can be confirmed yet. Both the Extreme7 and the Gaming K6 would seem to have Purity Sound 3, the next iteration of ASRock’s upgraded motherboard audio. This should still be the Realtek ALC1150, though that is not confirmed as of yet.
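    The twelve-lane figure follows directly from the PCIe math; a quick sketch of the bandwidth involved, using the standard PCIe 3.0 rate of 8 GT/s per lane with 128b/130b encoding:

```python
# Rough PCIe 3.0 bandwidth math behind the three Ultra M.2 slots:
# 8 GT/s per lane with 128b/130b line coding, four lanes per slot.
GT_PER_LANE = 8e9                # transfers per second
EFFICIENCY = 128 / 130           # 128b/130b encoding overhead
bytes_per_lane = GT_PER_LANE * EFFICIENCY / 8   # ~985 MB/s per lane

per_slot = bytes_per_lane * 4    # Ultra M.2 = PCIe 3.0 x4
total_lanes = 3 * 4              # three x4 slots

print(round(per_slot / 1e9, 2), "GB/s per slot")  # 3.94 GB/s per slot
print(total_lanes, "lanes for all three slots")   # 12
```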
    Another surprising element of the press release was the announcement that ASRock’s Gaming brand is expanding beyond motherboards. Similar to other gaming brands from motherboard companies, ASRock will also provide mice and mousepads (no mention of keyboards or headsets), but in an interesting twist they will also provide a router. The G10 is meant to be a similarly themed device (with the logo and the angled edges) but offers 4T4R connectivity on 802.11ac. This means up to 1733 Mbps of connectivity over a single WiFi connection. The only critical point here is that no one sells a 4T4R WiFi card for a PC; the most we’ve seen so far is 3T3R in commercial applications. It will be interesting to see if that leads down a path of better WiFi bandwidth opportunities.
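    The 1733 Mbps number is the standard 802.11ac ceiling for four spatial streams; roughly, it falls out of the 80 MHz channel parameters like so:

```python
# Where the 1733 Mbps figure comes from: 802.11ac at 80 MHz with
# 256-QAM, rate-5/6 coding, and a short guard interval gives ~433.3
# Mbps per spatial stream; 4T4R means four streams.
subcarriers = 234        # data subcarriers in an 80 MHz channel
bits_per_symbol = 8      # 256-QAM carries 8 bits per subcarrier
coding_rate = 5 / 6
symbol_time_us = 3.6     # OFDM symbol with short guard interval

per_stream = subcarriers * bits_per_symbol * coding_rate / symbol_time_us

print(round(per_stream, 1), "Mbps per stream")  # 433.3 Mbps per stream
print(round(per_stream * 4), "Mbps with 4T4R")  # 1733 Mbps with 4T4R
```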
    We have plans to meet with ASRock during Computex where we hope to get some hands-on time with this stuff. Release dates and pricing are not being announced as of yet.
    Gallery: ASRock at Computex 2015: Pre-Show PR gives Z170 Motherboards and 4T4R Router



    More...

  6. #5036

    Anandtech: A Look At The Changes In The Android M Preview

    Today during Google I/O 2015's opening keynote Google announced the latest release of their Android operating system. The new version of Android is using the code name Android M, much like how Lollipop was initially referred to as Android L. While Android Lollipop was a major release with many new features and a comprehensive redesign, Android M goes back to basics and focuses on polishing what Google introduced with Lollipop. That isn't to say that there are no new features or design tweaks, but they are not on the same scale as Lollipop's changes. It may be best to think of Android M as being to Lollipop what Jelly Bean was to Ice Cream Sandwich.
    When Google originally announced Android L at Google I/O last year, they released a developer preview so developers could prepare their apps for the new operating system, as well as for both developers and Android enthusiasts to provide feedback about the changes that were being made. Google has done the same thing with Android M, and they have committed to releasing over-the-air updates to the developer preview on a monthly basis.
    While I'm hopeful that the final name of this next Android release will be Android Muffin, we won't know details about the name of the new OS for quite some time. Opening the Android M Easter egg in the About phone section of Settings shows you the new Android M logo which you can see in the cover image of this article. Long pressing on that logo kicks you out of that section and presents you with the increasingly common Tsu (

    More...

  7. #5037

    Anandtech: NVIDIA Introduces AndroidWorks For Android Game Developers

    Today NVIDIA is expanding their GameWorks developer program to the realm of Android devices. GameWorks encompasses a range of NVIDIA technologies and tools like PhysX, VisualFX, OptiX, and the NVIDIA Core SDK which allows developers to program for NVIDIA GPUs using NVAPI instead of APIs like DirectX or OpenGL. It also includes many tools to help developers test and debug their games.
    AndroidWorks aims to simplify the experience of developing games on Android. It includes a number of libraries for developers to use, along with sample code. It also includes a number of tools for profiling performance and debugging. While AndroidWorks is based on NVIDIA's existing Tegra Android Developer Pack, it is not limited to being used on NVIDIA devices. NVIDIA has tested AndroidWorks programs on a number of devices, including the x86-based ASUS Memopad and Google Nexus Player, as well as ARM devices like the Nexus 7 and the Galaxy Tab S.
    To improve on the native development experience offered by the Android SDK and NDK, the tools and SDK included with AndroidWorks integrate with Microsoft's Visual Studio IDE on Windows. NVIDIA plans to provide frequent updates to AndroidWorks, and they hope that it will become the tool of choice for game developers targeting Android.


    More...

  8. #5038

    Anandtech: Google and Qualcomm Partner To Make A Project Tango Smartphone

    Project Tango is the name of Google's initiative to build smartphones and tablets with cameras and sensors that can be used to track the device's position in 3D space and map the environment around it. Although it was initially announced as a broader platform, the reference kit that was eventually put on sale for prospective developers was a 7" tablet which uses NVIDIA's Tegra K1 SoC. That will be changing soon, with an announcement from Qualcomm and Google that a Project Tango smartphone is currently being developed.
    While there aren't many details as of yet, an image from Engadget shows that the phone will have the necessary array of cameras for capturing information about the phone's environment. Naturally, the phone will be powered by Qualcomm's Snapdragon 810 SoC. According to Qualcomm, this new Project Tango development platform will be available to developers during the third quarter of this year.
    Source: PR Newswire via Engadget


    More...

  9. #5039

    Anandtech: Understanding Project Soli and Jacquard: Wearable Control Breakthroughs

    Today at Google I/O, Google’s Advanced Technology and Projects (ATAP) group went over some of the things that they’ve worked on in the year since the previous I/O. The most immediate and important announcements were centered on technologies to enable wearables: Project Soli and Jacquard, both focused on solving the challenges of input on wearables. Although wearables are on the market today, one of the biggest problems with Android Wear, and with most wearables in general, is that an almost purely touch-based interface means that nearly any touch target has to take up a huge part of the display just to be reliably hittable.
    Google’s ATAP group noted that this limitation is partially due to the processing limitations of the brain, which vary for different parts of the body. Something like the elbow has very little sensation and cannot be finely placed. ATAP’s research indicates that the fingers, by comparison, have 20 times the processing bit rate, and therefore can be used to provide precise input.
    For ATAP, the logical conclusion to this was Soli, a new single-chip solution for detecting finger motion and gestures. Soli is a fundamentally different approach from conventional solutions like capacitive touch screens, which lack the ability to detect 3D motion, and camera-based motion detection systems like Kinect, which cannot accurately detect fine motion. The solution here is said to be 60 GHz radio, which has a sufficiently high frequency that fingers are relatively reflective to it, with enough resolution to distinguish motion as fine as a capacitive touch display can. It was also said that the transmitter has a wide cone-like spread rather than scanning with a fine beam of radio waves, which helps to drive down cost. The current solution uses two transmitters and four receivers, which presumably helps with noise rejection and accurately reading multiple fingers. Given the extremely high frequency, it’s also a pretty fair bet that we’re looking at a superheterodyne architecture, but unfortunately I was unable to get confirmation on this from ATAP representatives.
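    As a back-of-the-envelope check on the 60 GHz figure, the wavelength at that frequency is on the order of millimeters, which is roughly the scale of the finger motions Soli is trying to resolve:

```python
# Quick sanity check on why 60 GHz radio suits fine gesture sensing:
# the wavelength is millimeter-scale, comparable to sub-centimeter
# finger movements.
C = 299_792_458   # speed of light, m/s
freq_hz = 60e9    # Soli's reported operating frequency

wavelength_mm = C / freq_hz * 1000
print(round(wavelength_mm, 2), "mm")  # ~5.0 mm
```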
    With this hardware, and a lot of proprietary algorithms that leverage machine learning/deep neural nets, it becomes possible to track extremely fine gestures. Something as simple as tapping your index finger with your thumb could be equivalent to tapping a button, and swiping your thumb against your index finger could be equivalent to scrolling on a touch screen.
    Clearly, there’s a ton of possibilities with such technology, and probably the most obvious one is using it to enable a seamless experience with wearables like Google Glass, which suffered from a poor user interface as it relied on head motion and touching the glasses themselves to navigate the UI. ODG had an interesting finger-mounted controller, but it was obvious that it would be easily lost and was inherently somewhat clunky due to the number of controls that had to be integrated. The demos tended to focus on smartwatches, but my experience with the Apple Watch suggests that the addition of Force Touch and the digital crown alongside the touchscreen really resolves most of the problems that come with the small displays of smartwatches. The real benefits start to appear in the context of other types of wearables.
    The other technology that ATAP unveiled was named Jacquard, which is a bit less impressive from a pure technology perspective but definitely interesting from a manufacturing perspective. In short, Jacquard is a way of weaving capacitive touch digitizers into cloth.
    According to Google, there were two major challenges in enabling Jacquard. The first was that pre-existing conductive yarn had incredibly high resistance, at 15 ohms per meter; 22 AWG copper wire, by comparison, has a resistance of 0.053 ohms per meter, around 280 times lower. That makes it incredibly difficult to implement a good touch panel of any significant size. In addition, the yarn was only available in a single color. The other problem was that no one had attempted to deeply integrate so many wires into a fabric before, and the ATAP team faced many challenges when it came to connecting the wires to logic once they had been woven into the fabric, which is necessary to actually make the touch panel useful.
    To solve the problem of connecting a touch panel to the fabric, Google developed a special type of weaving that exposes the conductive strands of yarn in a specific order, allowing a fast and simple connection to connectors. To solve the problem of high-resistance yarn, ATAP made their own type of yarn with a resistance only around twice as high as that of 22 AWG copper wire, and which can be made in almost any color or material.
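    Putting the quoted resistance figures side by side (numbers as given in the presentation; the roughly 283x ratio is just the quotient of the two):

```python
# Resistance figures as quoted by ATAP: off-the-shelf conductive yarn
# versus 22 AWG copper wire, plus ATAP's custom yarn at roughly twice
# the resistance of copper.
yarn_ohm_per_m = 15.0
copper_ohm_per_m = 0.053
atap_yarn_ohm_per_m = 2 * copper_ohm_per_m  # "around twice as high" as copper

print(round(yarn_ohm_per_m / copper_ohm_per_m))  # ~283x more resistive than copper
print(round(atap_yarn_ohm_per_m, 3), "ohm/m")    # 0.106 ohm/m
```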
    Although there are effectively no details on the actual process, which is likely a trade secret, it is said that this cloth works without issue in pre-existing textile machines. To demonstrate this, they showed off a tailored jacket made with Jacquard touch-sensitive material to prove that the technology can blend seamlessly with fashion. The result is that it becomes possible to have much larger touch panels on wearable devices due to their effectively invisible nature. Given that Levi's has already partnered with ATAP for this project, I suspect that it is closer to commercialization than Soli. Overall, these two projects clearly address user experience problems in wearables, but it remains to be seen whether they can make these prototypes into mass-market products.


    More...

  10. #5040

    Anandtech: NVIDIA Launches Mobile G-Sync, Enables Windowed G-Sync, & More

    With Computex kicking off today NVIDIA has a number of announcements hitting the wire at the same time. The biggest news of course is the launch of the GeForce GTX 980 Ti, however the company is also releasing a number of G-Sync announcements today. This includes the launch of Mobile G-Sync for laptops, Windowed G-Sync support for laptops and desktops, new G-Sync framerate control functionality, and a number of new G-Sync desktop monitors.
    Mobile G-Sync

    We'll kick things off with the biggest of the G-Sync announcements, which is Mobile G-Sync. Today NVIDIA is announcing a very exciting product for notebook gamers. After much speculation (and an early prototype leak) NVIDIA’s G-Sync technology is now coming to notebooks.
    Anand took a look at the original G-Sync back in 2013 and for those that need a refresher on the technology, this would be a great place to start. But what G-Sync allows for is a variable refresh rate on the display which allows it to stay in sync with the GPU’s abilities to push out frames rather than forcing everything to work at a single fixed rate as dictated by the display.
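    As a toy illustration of the fixed-versus-variable refresh problem described above (the numbers are illustrative, not tied to any particular panel):

```python
import math

# At a fixed 60 Hz, a frame that takes 22 ms misses the 16.7 ms window
# and (with v-sync) waits for the next refresh, so it appears after
# ~33.3 ms; with variable refresh the display simply refreshes the
# moment the frame is ready.
FIXED_HZ = 60
frame_time_ms = 22.0

refresh_ms = 1000 / FIXED_HZ                        # 16.67 ms per refresh
refreshes_waited = math.ceil(frame_time_ms / refresh_ms)
fixed_display_ms = refreshes_waited * refresh_ms

print(round(fixed_display_ms, 1))  # 33.3 ms on a fixed 60 Hz display
print(round(frame_time_ms, 1))     # 22.0 ms with variable refresh
```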
    From a technical/implementation perspective, because desktop systems can be hooked to any monitor, desktop G-Sync originally required that NVIDIA implement a separate module - the G-Sync module - to be put into the display and to serve as an enhanced scaler. For a desktop monitor this is not a big deal, particularly since it was outright needed in 2013 when G-Sync was first introduced. However with laptops come new challenges and new technologies, and that means a lot of the implementation underpinnings are changing with the announcement of Mobile G-Sync today.
    With embedded DisplayPort (eDP) now being a common fixture in high-end notebooks these days, NVIDIA will be able to do away with the G-Sync module entirely and rely just on the variable timing and panel self-refresh functionality built in to current versions of eDP. eDP's variable timing functionality was of course the basis of desktop DisplayPort Adaptive-Sync (along with AMD's Freesync implementation), and while the technology is a bit different in laptops, the end result is quite similar. Which is to say that NVIDIA will be able to drive variable refresh laptops entirely with standardized eDP features, and will not be relying on proprietary features or hardware as they do with desktop G-Sync.
    Removing the G-Sync module offers a couple of implementation advantages. The first of these is power; even though the G-Sync module replaced a scaler, it was a large and relatively power-hungry device, which would make it a poor fit for laptops. The second advantage is that it allows G-Sync to be implemented against traditional, lower-cost laptop eDP scalers, which brings the price of the entire solution down. In fact for these reasons I would not be surprised to eventually see NVIDIA release a G-Sync 2.0 for desktops using just DisplayPort Adaptive-Sync (for qualified monitors only, of course), however NVIDIA obviously isn't talking about such a thing at this time. Laptops as compared to desktops do have the advantage of being a known, fixed platform, so there would be a few more issues to work out to bring something like this to desktops.
    Moving on, while the technical underpinnings have changed, what hasn't changed is how NVIDIA is approaching mobile G-Sync development. For laptops to be enabled for mobile G-Sync they must still undergo qualification from NVIDIA, and while NVIDIA doesn't release specific financial details, there is a fee for this process (and presumably per-unit royalties as well). Unfortunately NVIDIA also isn't commenting on what kind of price premium G-Sync enabled laptops will go for, though they tell us that they don't expect the premium to be dramatically different, if only because they think that all gaming laptops will want to have this feature.
    As far as qualification goes, the process is designed to ensure a minimum level of overall quality in products that receive G-Sync branding, along with helping ODMs tune their notebooks for G-Sync. This process is something NVIDIA considers a trump card of sorts for the technology, and something they believe delivers a better overall experience. From what we're hearing on quality, it sounds like NVIDIA is going to put their foot down on low quality panels, for example, so that the G-Sync brand and experience doesn't get attached to subpar laptops. Meanwhile the tuning process is similar to the one on the desktop, with laptops and their respective components going through a profiling and optimization process to determine their refresh properties and pixel response times in order to set G-Sync timings and variable overdrive.
    Which on that note (and on a slight tangent), after initially staying mum on the issue in the early days of G-Sync (presumably as a trade secret), NVIDIA is now confirming that all G-Sync implementations (desktop and mobile) include support for variable overdrive. As implied by the name, variable overdrive involves adjusting the amount of overdrive applied to a pixel in order to make overdrive more compatible with variable refresh timings.
    As a quick refresher, the purpose of overdrive in an LCD is to decrease the pixel response time and resulting ghosting by overdriving pixels to get them to reach the desired color sooner. This is done by setting a pixel to a color intensity (voltage) above or below where you really want it to go, knowing that due to the response times of liquid crystals it will take more than 1 refresh interval for the pixel to reach that overdriven value. By driving a pixel harder and then stopping it on the next refresh, it's possible to reach a desired color sooner (or at least, something close to the desired color) than without overdrive.
    Overdrive has been a part of LCD displays for many years now, however the nature of overdrive has always implied a fixed refresh rate, as it's not possible to touch a pixel outside of a refresh window. This in turn leads to issues with variable refresh, as you don't know when the next refresh may happen. Ultimately there's no mathematically perfect solution here - you can't predict the future with 100% accuracy - so G-Sync variable overdrive is a best-effort attempt to predict when the next frame will arrive and adjust the overdrive values accordingly. The net result is a slight decrease in color accuracy in motion versus a fixed refresh rate, due to errors in prediction, but an overall reduction in ghosting versus not running overdrive at all.
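    A heavily simplified sketch of the prediction idea (this is not NVIDIA's algorithm, which is proprietary; the moving-average predictor, the 25 ms settle time, and the scaling rule are all invented for illustration):

```python
# Toy variable-overdrive sketch; not NVIDIA's actual (proprietary) logic.
def predict_next_interval_ms(recent_intervals):
    # Best-effort guess at the next frame time: a simple moving average.
    return sum(recent_intervals) / len(recent_intervals)

def overdrive_value(current, target, predicted_ms, response_ms=25.0, limit=255):
    # If the pixel's assumed settle time exceeds the predicted window,
    # overshoot the target proportionally; otherwise drive to it directly.
    boost = max(response_ms / predicted_ms, 1.0)
    driven = current + (target - current) * boost
    return max(0, min(limit, round(driven)))

history = [16.7, 18.2, 20.1]           # recent frame times, ms
nxt = predict_next_interval_ms(history)
print(round(nxt, 1))                    # 18.3
print(overdrive_value(100, 160, nxt))   # 182: driven past the target of 160
```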
    But getting back to the subject at hand of mobile G-Sync, this is a big win for notebooks for a couple of reasons. First, more notebooks are sold now than desktops, so this makes G-Sync available to a bigger audience. Of course not all of those notebooks have discrete GPUs, but NVIDIA has seen steady growth in the mobile GeForce segment, so the market is strong. The other reason this is important is that mobile products are much more thermally and space constrained, so mobile parts are always going to be slower than desktop parts. That gap has narrowed with the latest Maxwell parts, but it is still there. G-Sync should help even more on mobile than it does on the desktop due to the lower overall framerates of laptop parts.
    But there is a catch, and it’s a big one.
    In order for G-Sync to be available on a laptop, a couple of things need to be true. First, the laptop must have a GeForce GPU obviously. Second, the laptop manufacturer needs to work with NVIDIA to enable this, since NVIDIA has to establish the parameters for the particular laptop panel in order to correctly know the maximum and minimum refresh rate as well as the amount of over/under-drive necessary. But the third is the big one. The laptop display must be directly connected to the GeForce GPU.
    What this means is that in order for G-Sync to be available, Optimus (NVIDIA’s ability to switch from the integrated graphics to the discrete NVIDIA graphics) will not be available. They are, at least for now, mutually exclusive. As a refresher, with Optimus the integrated GPU is the one actually connected to the display, and when Optimus is enabled the iGPU acts as an intermediary and display controller: the discrete GPU renders frames and feeds them through the iGPU to the display. Because G-Sync requires the GeForce GPU to be directly connected to the display, Optimus-enabled notebooks will not have G-Sync available.
    Obviously this is a big concern because Optimus is found on almost all notebooks that have GeForce GPUs, and has been one of the big drivers of reasonable battery life on gaming notebooks. However, going forward, it is likely that true gaming notebooks will drop this support in order to offer G-Sync, while more versatile devices which may use the GPU only once in a while, or for compute purposes, will likely keep it. There is a trade-off that the ODM needs to consider. I asked specifically about this and NVIDIA feels that it is less of an issue than it was in the past because they have worked very hard on Maxwell’s idle power levels, but despite this there is likely going to be a hit to battery life. Going forward this is something we’d like to test, so hopefully we’ll be able to properly quantify the tradeoff in the future.
    As for release details, mobile G-Sync is going to be available starting in June with laptops from Gigabyte’s Aorus line, MSI, ASUS, and Clevo. Expect more soon though, since this should be a killer feature even on less powerful laptops.
    Wrapping things up, as I mentioned before, mobile G-Sync seems like a good solution to the often lower capabilities of gaming laptops, and it should bring G-Sync to many more people since a dedicated G-Sync capable monitor is not required. It really is a shame that it does not work with Optimus though, since that has become the standard on NVIDIA based laptops. ODMs could use a hardware multiplexer to get around this, which was the solution prior to Optimus, but due to the added cost and complexity my guess is that this will not be available on very many, if any, laptops that want to leverage G-Sync.
    Windowed Mode G-Sync

    The second major G-Sync announcement coming from NVIDIA today is that G-Sync is receiving windowed mode support, with that functionality being rolled into NVIDIA's latest drivers. Until now, running a game in windowed mode could cause stutters and tearing because in windowed mode the output image is composited by the Desktop Window Manager (DWM) in Windows. Even if a game is rendering 200 frames per second, DWM will only refresh the image on its own schedule; the application's off-screen buffer can be updated many times before DWM updates the actual image on the display.
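    To make the compositing problem concrete, here is a toy model (my own sketch, not NVIDIA's or Microsoft's implementation) of a fixed-rate compositor sampling a game's off-screen buffer: any frame finished between two compositor refreshes is simply overwritten and never reaches the screen.

    ```python
    # Toy model: a fixed-rate compositor (like DWM) samples the game's
    # off-screen buffer at its own cadence. Frames rendered between
    # compositor refreshes are overwritten before they are ever displayed.

    def frames_displayed(game_fps, compositor_hz, duration_s=1.0):
        """Count distinct game frames that actually reach the screen."""
        frame_times = [i / game_fps for i in range(int(game_fps * duration_s))]
        shown = set()
        for k in range(int(compositor_hz * duration_s)):
            t = k / compositor_hz
            # the compositor grabs whichever frame finished most recently
            latest = max((i for i, ft in enumerate(frame_times) if ft <= t),
                         default=None)
            if latest is not None:
                shown.add(latest)
        return len(shown)

    print(frames_displayed(200, 60))   # 60: only 60 of 200 rendered frames are ever shown
    ```

    The numbers are illustrative only, but they show why a 200 fps game gains nothing under a 60 Hz compositor, and why letting the active window drive the refresh rate instead is a meaningful change.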
    NVIDIA will now change this using their display driver, and when Windowed G-Sync is enabled, whichever window is the current active window will be the one that determines the refresh rate. That means if you have a game open, G-Sync can be leveraged to reduce screen tearing and stuttering, but if you then click on your email application, the refresh rate will switch back to whatever rate that application is using. Since this is not always going to be a perfect solution - without a fixed refresh rate, it's impossible to make every application perfectly line up with every other application - Windowed G-Sync can be enabled or disabled on a per-application basis, or just globally turned on or off.
    Meanwhile NVIDIA is also noting at this time that Windowed G-Sync is primarily for gaming applications, so movie viewers looking to get perfect timing in their windowed media players will be out of luck for the moment. The issue here isn't actually with Windowed G-Sync, but rather that current media players do not know about variable refresh technology and will always attempt to run at the desktop refresh rate. Once media players become Windowed G-Sync aware, it should be possible to have G-Sync work with media playback as well.
    G-Sync Max Refresh Rate Framerate Control (AKA G-Sync V-Sync)

    Third up on NVIDIA’s list of G-Sync announcements is support for controlling the behavior of G-Sync when framerates reach or exceed the refresh rate limit of a monitor. Previously, NVIDIA would cap the framerate at the refresh rate, essentially enforcing v-sync at very high framerates. With their latest update, however, NVIDIA is handing that option to the user, allowing users to either enable or disable the framerate cap as they please.
    The tradeoff here is that capping the framerate ensures that no tearing occurs since there are only as many frames as there are refresh intervals, but it also introduces some input lag if frames are held back to be displayed rather than displayed immediately. NVIDIA previously opted for a tear-free experience, but now will let the user pick between tear-free operation or reducing input lag to the bare minimum. This is one area where NVIDIA’s G-Sync and AMD’s Freesync implementations have significantly differed – AMD was the first to allow the user to control this – so NVIDIA is going for feature parity with AMD in this case.
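    The latency cost of the cap can be sketched with a toy calculation (my own simplified model, assuming the cap simply holds a finished frame until the next refresh boundary, which is not a description of NVIDIA's actual driver logic):

    ```python
    # Illustrative sketch: above the panel's max refresh rate, capping the
    # framerate holds each finished frame until the next refresh interval
    # (tear-free, but added latency), while uncapped output swaps the frame
    # immediately (minimal latency, but the swap can tear mid-refresh).

    def added_latency_ms(render_ms, refresh_hz, capped):
        """Extra display latency for a frame that took render_ms to draw."""
        interval_ms = 1000.0 / refresh_hz
        if not capped:
            return 0.0                  # frame swapped as soon as it's done
        # capped: wait from frame completion to the next refresh boundary
        return interval_ms - (render_ms % interval_ms)

    # 144 Hz panel, frames finishing every 4 ms (i.e. the GPU can do 250 fps)
    print(added_latency_ms(4.0, 144, capped=True))    # ~2.94 ms of hold time
    print(added_latency_ms(4.0, 144, capped=False))   # 0.0 ms, but may tear
    ```

    Even a few milliseconds of hold time per frame matters to competitive players, which is presumably why both AMD and now NVIDIA leave the choice to the user.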
    New G-Sync Monitors

    Last but certainly not least from today’s G-Sync announcements, NVIDIA is announcing that their partners Acer and Asus are preparing several new G-Sync monitors for release this year. Most notably, both will be releasing 34” 3440x1440 ultra-wide monitors. Both displays are IPS based, with the Asus model topping out at 60Hz while the Acer model tops out at 75Hz. Meanwhile Acer will be releasing a second, 35” ultra-wide based on a VA panel and operating at a resolution of 2560x1080.
    Asus and Acer will also be releasing some additional traditional format monitors at 4K and 1440p. This includes some new 27”/28” 4K IPS monitors and a 27” 1440p IPS monitor that runs at 144Hz. All of these monitors are scheduled for release this year, however as they’re third party products NVIDIA is unable to give us a precise ETA. They’re hoping for a faster turnaround time than the first generation of G-Sync monitors, though how much faster remains to be seen.

