Canon EOS M6 Mk2 review

It’s been interesting to watch Canon’s approach to the mirrorless market: from the early days its EOS M series cameras failed to impress us, while in 2018 it launched a brand new full-frame series, the EOS R. Learning from its early weaknesses in the M line-up, however, has seen the company evolve its offerings, with the M6 Mark II expanding its design for easier use.

That’s not all, though. For 2019 Canon is going all-in when it comes to resolution, with the M6 MkII packing in the same 32.5-megapixel CMOS sensor as the also-just-announced EOS 90D DSLR. That’s a whole lot of pixels. Is it a whole lot of success?

What’s new? M6 vs MkII

  • M6 II: 32.5 megapixel CMOS sensor / M6: 24.2MP
  • M6 II: Up to 14fps burst shooting / M6: 7fps
  • M6 II: 4K video capture / M6: 1080p60 max
  • M6 II: Adds dual function dial & AF/MF switch
  • M6 II: Larger grip design

First, let’s wind back a little. In 2017 the original M6 arrived, the first M-series camera that made us think “OK, this is almost a success”. That was high praise given how little we could get on board with the EOS M5. The M6 Mark II, then, is a re-rub of the original’s design with additional features. So what’s different?

Canon EOS M6 Mark II review image 5

The majority of changes are under the hood, with that high-resolution sensor leading the charge. That’s a boost of around a third in pixel count compared to the original M6. It’s a new world as far as resolution is concerned, then, with Canon confident that it can retain quality while upping the count.

Thanks to a newer processor paired alongside – the Digic 8, which is one generation ahead of the Digic 7 in the original – the Mark II is also able to capture 4K video (the original maxed out at 1080p60). The newer camera doesn’t crop into the sensor either, so you get like-for-like ratios, i.e. a 50mm equivalent will produce the very same frame as it would for stills.

That new processor also brings added speed, with a 14fps burst shooting mode, even with autofocus activated. That’s double the rate of the first-gen model. As a point of comparison, the M6 Mark II is faster than the just-announced 90D (which is 11fps), showing that Canon is becoming less shy of allowing its mirrorless models to be ‘better’ than its DSLR equivalents.

Canon EOS M6 Mark II review image 2

Autofocus is Dual Pixel CMOS AF, which as we’ve seen from other Canon cameras is impressively quick – assuming it’s not too dark, anyway. That, we must say, is one area where the sensor-based system – despite claiming sensitivity to -5EV – can’t cut it compared to the viewfinder-based setup of a DSLR such as the 90D.

Not all of the M6 II’s changes are invisible, though. There’s a much more pronounced grip to the front for a better hold, while two new controls have also appeared: a dual-function dial (where the exposure compensation dial used to be on the original) for doubling-up the controls, and an AF/MF switch to the rear for quick auto/manual focus switching.

Canon EOS M6 Mark II review image 10

This new design is more in character for advanced users. It also addresses the typical shortcomings of the EOS M’s former layout, where controls could feel too buried. It’s a welcome change, although we still find the need to press a button to, say, adjust the ISO sensitivity a bit long-winded – especially compared to the EOS 90D.

Design & Performance

  • Dual Pixel CMOS AF autofocus for all modes
  • 5,481 positions for precise autofocus
  • Tilt-angle LCD screen, no viewfinder
  • Microphone input (1x 3.5mm port)
  • 14fps burst shooting max
  • Wi-Fi and Bluetooth

The M6 Mark II doesn’t feature a viewfinder, so it’s all about using it via the screen – well, unless you attach a finder accessory (sold separately). That screen is mounted on a movable bracket, so it can face forward for selfies, or tilt 45 degrees downward for waist-level use. It’s not a fully vari-angle screen like the 90D’s, but this design keeps everything nicely compact.

Canon EOS M6 Mark II review image 13

What’s best of all about the screen, however, is its touch-sensitivity. It’s responsive, and a menu option even lets you adjust that responsiveness to your preference – something other manufacturers ought to take on board. Either a tap on the screen or a press-and-drag will move the autofocus area with ease, making the M6 II about as easy to use as a smartphone.

However, the autofocus options are a little more restrictive than you’ll find elsewhere. Sure, the system is the same Dual Pixel AF setup as the original M6 – which delivers on-sensor phase-detection autofocus paired with contrast-detection autofocus – but you only get a handful of focus options: 1-point (at varying sizes), zone and tracking.

While these work fine – and there are almost 5,500 selectable positions for precision – the system lacks the complexity of its competitors. Panasonic’s G series betters it in every regard, in our view.

As we touched upon above, dark conditions also confound this focus system. Having tested the 90D and M6 II side-by-side in the same conditions, it’s clear that the DSLR’s viewfinder-based focus is the better of the two, more capable of latching onto focus in very dim conditions.

Canon EOS M6 Mark II review image 7

Some of this is lens dependent, however, so the better the glass on the front the more success you’ll see. The EF-M mount doesn’t have loads of lenses yet, but the 18-150mm we initially used struggled in dim and contrasting conditions at any focal length. We switched to the 32mm f/1.4 and that was far better. So there’s a lesson to be learnt there: it’s not all about the body – the lens paired alongside is just as, if not more, important.

Image & Video Quality

  • All-new 32.5-megapixel CMOS sensor
  • 4K video (24/25/30fps)
  • Digic 8 processor
  • ISO 100-25,600

When it comes to image quality, it’s perhaps no surprise that megapixel counts are on the increase. Larger images give greater flexibility for large prints or for heavier cropping – the kind of things you can’t even nearly do with a phone camera (not that we’re realistically comparing the two).


We’ve largely been shooting in dim conditions with the M6 Mark II, so any grain and apparent processing in our gallery of images is an inevitability given the four-figure ISO settings. In a sense it’s testament to just how well this camera performs – assuming it can focus in such conditions in the first place, which was a bit of a battle.

The increase in resolution does dictate how you’ll need to handle the camera somewhat, though. Going beyond the 30-megapixel mark means any tiny physical movements will be amplified in the results. As such, you’ll likely want to adopt faster shutter speeds to ensure perfect crispness.
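As a rough guide, the classic reciprocal rule – shutter speed at least 1/(focal length × crop factor) – can be sketched with an extra safety margin for the dense sensor. The 1.5x margin below is our own conservative assumption, not a Canon figure:

```c
#include <stdio.h>

/* Reciprocal rule of thumb for handheld shooting: use a shutter speed of
 * at least 1/(focal length x crop factor). The extra margin factor for a
 * dense 32.5MP sensor is an assumption on our part, not a Canon spec. */
double min_shutter_denominator(double focal_mm, double crop, double margin) {
    return focal_mm * crop * margin;  /* shoot at 1/result sec or faster */
}
```

For example, a 50mm lens on the M6 Mark II’s 1.6x crop sensor with a 1.5x margin suggests roughly 1/120 sec or faster.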

How grand will shots appear at the lowest ISO settings? Well, just as we said of the 90D, we don’t know yet. But we have suitably high expectations. Canon is adept when it comes to realistic colour, smooth gradations and well-balanced exposures – and we’d expect no different here.

Canon EOS M6 Mark II review image 9

The other major part of the M6 Mark II puzzle is video. It can capture 4K at 24/25/30fps, or offers Full HD 1080p capture at up to 120fps. There’s even a 3.5mm microphone jack to cater for recording (but no headphones monitoring). In that sense this is a potential powerhouse on the video front, and a great sign that Canon is finally on board with Ultra-HD capture from its fuller range of consumer devices.

First Impressions

Canon has turned a corner in its M series mirrorless line-up, with the Mark II M6 adding welcome changes that make for even greater ease-of-use, while the resolution reaches epic new heights.

But those changes don’t make it perfect by any means. It’s not entirely easy to use, we find, while the autofocus – which is typically very fast – can struggle in low-light and high contrast conditions, especially with some of the more basic EF-M lenses attached.

Just as we said of its predecessor: “what will sell the M6 Mark II are two things – the brand name and the resulting image quality”. It might not be the best in class – it’s too easy to look at Panasonic’s G series instead – but those large-scale and high-quality EOS images are undoubtedly an attraction. And 4K video capture is no slouch either.


Armed with iOS 0days, hackers indiscriminately infected iPhones for two years

Hackers exploited more than a dozen iOS vulnerabilities—most of them unpatched zerodays—in a two-year campaign that stole photos, emails, log-in credentials, and more from iPhones and iPads, researchers from Google’s Project Zero said.

The attacks were waged from a small collection of hacked websites that used the exploits to indiscriminately attack every iOS device that visited. Attacks against 14 separate vulnerabilities were packaged into five separate exploit chains that gave the attackers the ability to compromise up-to-date devices over a period of more than two years. An analysis of the well-written exploit chains shows they were likely developed contemporaneously with the exploited iOS versions, which spanned from iOS 10.0.1, released in September 2016, to 12.1.2, issued last December.

Real-time monitoring of entire populations

“I shan’t get into a discussion of whether these exploits cost $1 million, $2 million, or $20 million,” Project Zero researcher Ian Beer wrote in a deep-dive post analyzing the exploits and the malware they installed. “I will instead suggest that all of those price tags seem low for the capability to target and monitor the private activities of entire populations in real time.”

The post didn’t name or describe any of the hacked websites, other than to say they were estimated to “receive thousands of visitors per week.” Neither Project Zero nor Apple has offered any guidance to iOS users who want to know if they may have been infected. The installed malware, which is nearly impossible for most users to detect, can’t persist after a device reboot, so compromised phones are disinfected as soon as they’re restarted. Still, because the implant sent such a wide range of data to attacker-controlled servers, it may be possible for users of compromised devices to be monitored even after the malware is gone.

Researchers outside of Project Zero told Ars the quality and reliability of the exploits indicated they were written by developers with talent and experience. iOS is one of the hardest operating systems to compromise. The ability to combine non-public exploits in a way that “groomed” the highly fortified data structure known as the heap and bypassed other advanced protections was impressive, particularly considering the two-year span over which the exploits were collectively effective.

Significant effort

The 14 vulnerabilities comprised seven flaws in the WebKit package used by Safari, five bugs in the iOS kernel, and two flaws that escaped the browser sandbox, which attempts to keep untrusted code from interacting with sensitive parts of the OS. At least one of the five chains was still a zeroday when Project Zero discovered it early this year. The Google researchers reported those flaws to Apple on February 1 with a seven-day deadline to fix them before Google publicly disclosed them. Apple responded with an unscheduled update six days later.

“It feels like the amount of effort that went into the exploits is very significant,” said Charles Holmes, a managing principal research consultant who focuses on mobile security at Atredis Partners. “Maintaining capabilities off of the last three years of iOS and a combination of hardware devices and firmware—a lot of time and effort went into that. My gut feels like some nation was behind maintaining that capability.”

By contrast, the outside researchers were quick to point out that a detailed analysis showed the implant the exploits installed was crude. The malware made no attempt to conceal its processes. More surprising still, it used unencrypted HTTP channels to send login tokens, stored images, and transcripts of messages sent over Gmail, iMessage, WhatsApp, and other programs. The plain-text communications would have made it easy to detect the mass exfiltration of sensitive data by anyone who monitored the Wi-Fi or enterprise networks the infected phones connected to.

Another thing that made the attack unusual was that it targeted every iOS device that visited the hacked websites. Advanced espionage hackers typically try to protect their campaigns and the valuable zerodays they exploit by infecting only individuals of interest. By indiscriminately attacking every iOS visitor, the hackers in this case made it much easier for the campaign to come to light.

“It seems likely that these attackers were probably well funded but didn’t necessarily have a ton of experience doing cyber espionage operations, or didn’t really care if they got caught,” Patrick Wardle, principal security researcher at Jamf, an Apple enterprise software company, told Ars. “If I could guess, a nation state with a ton of money went out and bought a bunch of iOS exploit chains, cobbled them together and indiscriminately began targeting victims to spy on groups of interest.”

While unsophisticated, the implant offered a full suite of capabilities for stealing data from infected devices. The implant was particularly concerned with live location data and with databases for the end-to-end encryption apps WhatsApp, Telegram, and iMessage. Container directories—which store data used by most iOS apps, including unencrypted copies of sent and received messages—were uploaded for all of the following:

  • com.netease.mailmaster
  • com.rebelvox.voxer-lite
  • com.viber
  • ph.telegra.Telegraph
  • com.tencent.qqmail
  • com.atebits.Tweetie2
  • net.whatsapp.WhatsApp
  • com.facebook.Facebook

The attackers could get a listing of all installed apps on an infected device and make an ad-hoc request to download container directories for any specific apps that weren’t on the list. The attackers could also issue an “allapp” command that would download the container directories for all apps on the device. The malware checked an attacker-controlled server every 60 seconds for commands.

The implant also sent attackers a complete copy of the iOS keychain. The keychain contains a large amount of highly sensitive data, including credentials and certificates used to log into services such as Gmail and Facebook, as well as SSIDs and passwords for all saved Wi-Fi access points. The keychain also contains long-lived tokens used by services such as Google’s iOS Single-Sign-On to enable Google apps to access the user’s account. By uploading this data, the attackers could maintain access to the user’s Google account even after the implant was no longer running.

As noted earlier, the installed implant binary doesn’t survive a reboot, meaning a device will be disinfected as soon as it’s restarted. It’s not clear if the lack of persistence was intentional or the result of developer limitations. In either case, iPhones can go weeks or longer without being rebooted. By that point, the data obtained likely gave attackers other means to continue surveilling targets of interest.

The attack underscores the damage that can result when a device is compromised by even a “one shot” attack that lasts only a short period of time, said Will Strafach, founder of Sudo Security Group and an expert in iOS security.

“Persistence requires additional exploit(s) and allows active surveillance, but increases the possibility of detection,” he said. “A non persistent attack can nab large amounts of historical data which is likely no less damaging, as many folks (even with Signal and WhatsApp) do not routinely delete data, as they assume data-at-rest on their device to be safe.”

Anyone in Cupertino got a fuzzer?

Many of the exploits that installed the implant were the result of flaws the Project Zero researchers said would have been easy to catch with standard quality assurance and code-hardening processes. An exploit chain that targeted iOS versions 11 through 11.4.1 over a 10-month span, for instance, included a sandbox escape that was the result of a refactoring which introduced a “severe security regression”, mistakenly breaking an important security check for validating interprocess communication (IPC) messages in iOS.

“It’s difficult to understand how this error could be introduced into a core IPC library that shipped to end users,” Beer wrote. “While errors are common in software development, a serious one like this should have quickly been found by a unit test, code review or even fuzzing. It’s especially unfortunate as this location would naturally be one of the first ones an attacker would look.”

A different chain targeting iOS 12 and 12.1 similarly exploited vulnerabilities the Project Zero researchers said should have been caught before shipping. Beer wrote:

It’s the kernel bug used here which is, unfortunately, easy to find and exploit (if you don’t believe me, feel free to seek a second opinion!). An IOKit device driver with an external method which in the very first statement performs an unbounded memmove with a length argument directly controlled by the attacker:

ProvInfoIOKitUserClient::ucEncryptSUInfo(char* struct_in,
                                         char* struct_out) {

The contents of the struct_in buffer are completely attacker-controlled.

Similar to iOS Exploit Chain 3 [mentioned above], it seems that testing and verification processes should have identified this exploit chain.
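To illustrate the class of bug Beer describes – this is a hypothetical sketch of the pattern, not the actual driver code, and the structure and function names below are our own inventions – here is an unbounded, attacker-controlled-length copy alongside a bounds-checked version:

```c
#include <stdint.h>
#include <string.h>

#define OUT_CAP 64  /* fixed-size destination buffer */

/* Hypothetical input structure for an external method: the length field
 * arrives directly from the caller, i.e. it is attacker-controlled. */
struct su_info {
    uint32_t length;
    uint8_t  data[256];
};

/* Vulnerable pattern: the memmove length comes straight from the input
 * with no bounds check, so length > OUT_CAP overflows the destination. */
void encrypt_su_info_vulnerable(const struct su_info *in, uint8_t *out) {
    memmove(out, in->data, in->length);
}

/* Hardened pattern: validate the attacker-supplied length before copying. */
int encrypt_su_info_checked(const struct su_info *in, uint8_t *out) {
    if (in->length > OUT_CAP || in->length > sizeof in->data)
        return -1;  /* reject oversized requests */
    memmove(out, in->data, in->length);
    return 0;
}
```

The checked version is exactly the kind of first-statement validation that, per Beer, a unit test or fuzzer should have flagged as missing.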

Challenging the perceived scarcity of iOS exploits

The campaign is concerning, because it challenges the conventional thinking that iOS vulnerabilities are exploited only in limited instances and then only against highly targeted individuals. The price for a single exploit chain is typically valued in the millions of dollars, in part because of the perceived scarcity of the flaws. The attackers’ ability to continuously exploit vulnerabilities over two years—and to do so in a way that was easy for others to see—demonstrates a new wrinkle to iOS exploitation.

“To see an attacker do this with iOS exploits is interesting,” said Wardle, an Apple security expert who previously worked as a hacker for the National Security Agency. “It shows that this group has no problem acquiring these capabilities and likely can acquire more if needed.”

Apple to replace cracked screens on select aluminum Apple Watch Series 2 and Series 3 models


Apple on Friday launched a repair program to address a “very rare” screen cracking issue reported by owners of older aluminum Apple Watch models, with the company set to replace faulty displays free of charge.

Apple Watch Repair

Cracks might form and propagate around the device display.

Officially filed as an “Exchange and Repair Extension Program,” the replacement initiative covers screens of aluminum Apple Watch Series 2 and Series 3 devices that might, “under very rare circumstances,” crack.

According to Apple, impacted units exhibit a crack that forms along the rounded edge of the screen, as shown in the above graphic. The crack might start on one side of the device before propagating to other parts of the screen.

The company did not specify an underlying cause of the problem, but customers have in the past reported damage similar to that detailed in the repair program document. In some cases, user devices exhibited cracks just weeks after purchase, and reportedly without substantial trauma.

Whether the fault lies in the screens or the design of Apple’s wearable remains unknown.

Also unknown is whether Apple plans to reimburse affected customers for the cost of prior screen replacements. The support document does not mention such a provision and Apple is unlikely to offer the option as it would be difficult to verify claims that a watch was impaired by the now known issue.

Apple Watch variants eligible for screen replacement service include 38mm and 42mm aluminum Apple Watch Series 2, Apple Watch Nike+ Series 2, Apple Watch Series 3 with GPS, Apple Watch Series 3 with GPS + Cellular, Apple Watch Nike+ Series 3 with GPS and Apple Watch Nike+ Series 3 with GPS + Cellular models. All color options are included in the program.

The repair program webpage includes a link to a support document explaining how users can identify their Apple Watch by model number, which can be found in the Apple Watch app for iOS or on the watch casing.

Owners with eligible Apple Watch Series 2 and Series 3 models can take their device to an Apple Authorized Service Provider or brick-and-mortar Apple Store for assessment. Alternatively, customers can arrange to mail in their watch by contacting Apple Support. All repairs will be conducted at an Apple Repair Center and returned in approximately five days, Apple says.

Today’s repair program is the first to address problems with Series 2 and Series 3 devices. Apple in 2017 issued internal repair period extensions for first-generation Apple Watch models suffering from separated back covers and swollen batteries.

Chinese app that makes users look like celebrities goes viral


A mobile application dubbed ZAO is the latest viral trend that has Chinese internet users doing a double-take over how they might appear as celebrities through artificial intelligence technologies.

From late Friday night, savvy social media users started posting videos starring themselves in footage taken from blockbuster movies or hit TV series. ZAO means “make” or “manufacture” in Chinese.

The app was developed by an internet technology company based in Changsha, Hunan province, whose head, Lei Xiaoliang, was a co-founder of stranger-networking app Momo. The company was essentially controlled by Momo, according to Tianyancha, a Chinese corporate information data provider.

To experience the digital makeover, users upload a front-on photo of their face at a resolution the system deems good enough for the swap. They also need to follow a string of instructions – such as “open your mouth” or “lift your head” – to verify that a real person is using the app.

After that, users can choose from hundreds of video clips and replace a star’s face with their own. It normally takes about 10 seconds to finish the face-swap for a 20-second video.

Creative works sweeping WeChat Moments had all but “paralyzed” ZAO’s servers as of Saturday morning, with users reporting malfunctions or severe delays in video synthesis.

Every now and then face-changing apps have caught attention on social media, with the likes of FaceApp instantly altering the appearance of a person’s face by adding wrinkles and grey hair. But this one is particularly popular as it churns out videos in motion rather than static pictures.

“I just cannot keep my hands off it,” said Shen from Shanghai, who admitted playing the “mockup” game for the entire night and churning out over 20 videos. “I believe people playing this are truly bringing the character to themselves, and that explains why it successfully got into people’s head.”

But there are privacy concerns, too. Users signing up for the service must agree to clauses that allow the company to use their original photos, synthesized photos and videos free of charge, in perpetuity and worldwide. The company also retains the right to modify these photos using its technology.

“While such protocols are highly controversial, the app itself seems to really make people happy,” said Cheng Mingxia, assistant dean of Tencent Research Institute. “This is the kind of challenge facing future science and technology applications.”

Apple offers customers even more options for safe, reliable repairs

Cupertino, California — Apple today announced a new repair program, offering customers additional options for the most common out-of-warranty iPhone repairs. Apple will provide more independent repair businesses — large or small —  with the same genuine parts, tools, training, repair manuals and diagnostics as its Apple Authorized Service Providers (AASPs). The program is launching in the US with plans to expand to other countries.

“To better meet our customers’ needs, we’re making it easier for independent providers across the US to tap into the same resources as our Apple Authorized Service Provider network,” said Jeff Williams, Apple’s chief operating officer. “When a repair is needed, a customer should have confidence the repair is done right. We believe the safest and most reliable repair is one handled by a trained technician using genuine parts that have been properly engineered and rigorously tested.”

The new independent repair provider program complements Apple’s continued investment in its growing global network of over 5,000 AASPs that lead the industry for customer satisfaction and help millions of people with both in- and out-of-warranty service for all Apple products.

There is no cost to join Apple’s independent repair program. To qualify for the new program, businesses need to have an Apple-certified technician who can perform the repairs. The process for certification is simple and free of charge. To learn more and apply, visit Apple’s website. Qualifying repair businesses will receive genuine Apple parts, tools, training, repair manuals and diagnostics at the same cost as AASPs.

Over the past year, Apple has launched a successful pilot with 20 independent repair businesses in North America, Europe and Asia who are currently offering genuine parts for repairs. Today’s announcement follows Apple’s recent major expansion of its authorized service network into every Best Buy store in the US, tripling the number of US AASP locations compared to three years ago.

Sony A6600 initial review: Compact, powerful, speedy

Sony has just unleashed its most powerful A6000-series camera to date, and from our first try with it, there’s definitely a lot to like about this new APS-C model. It essentially has all the bells and whistles you’d likely find in some of the company’s more expensive full-frame cameras, but crams them into a much smaller footprint.

It’s called the A6600 and boasts several improvements over its predecessors, including in-body stabilisation, much better battery life, 4K HDR capture, real-time face and eye tracking for people and animals, and a svelte design that’s easy to grip and carry all day.

Of course, with its high end features comes a high end price tag, but it’s still suitably cheaper than any of Sony’s A7 series, and that might just make it a winner. Especially when combined with one of Sony’s new high-end G Master lenses.

Stylish, uncomplicated

  • Magnesium alloy
  • Dust/moisture resistant
  • 180 degree tilting touchscreen
  • Integrated headphone jack + mic input

Look at it at arm’s length, and there’s plenty about the A6600’s looks that make it immediately familiar. It’s clearly a Sony A6000-series camera, with its compact rectangular body and the E-Mount lens mount on the front that takes up nearly all of the available space.

Sony A6600 review image 2

Despite its slightly heavier weight (compared to its predecessors), the A6600 really does feel good when held. It’s small, which is great, but it also feels well made, sturdy and durable. That’s predominantly thanks to the magnesium alloy chassis which offers both moisture and dust resistance. 

Part of the joy of holding this camera is also in the design of its grip. It doesn’t feel small or overly cramped, giving you a nice in-hand feel, only helped further by the grippy texture and the rejigged placement of the power/shutter buttons on top. All in all, it’s really easy to use one handed, and doesn’t ever cause tiredness, even when carried around for hours.

It’s not all hunky dory though. Sony has kept with its own version of an articulating screen on the back, which is both great, and not great all at once. We appreciate the sturdiness and strength provided by the hinges, mechanism and framing that holds the little LCD screen in place. That has to be commended. It’s not overly loose, so you can get it precisely how you want it, and know that it’s going to stay there.

It does seem to be missing a trick in places though. When flipped 180 degrees – allowing you to see yourself when shooting a vlog or a selfie – the screen isn’t completely in view. A tiny sliver of the display’s bottom edge is obstructed by the top of the camera. More frustrating for videographers, however, is that it sits almost directly behind the hot-shoe mount. That means if you have a mic or wireless mic kit mounted on it, you’re completely blocking the display, rendering it useless.

Sony A6600 review image 7

Otherwise, there’s the usual smattering of buttons and controls on the back of the camera, most of which are easy to figure out if you sit down with it for a few minutes. They include two custom function buttons on the top edge that you can set to control whatever you want, whether that’s switching between animal and people tracking modes or something else entirely.

There are two further custom function buttons on the back, joining the directional pad, menu button, manual/auto focus switch, a primary function button and the gallery access button.

Auto heaven

  • 24.3MP APS-C sensor
  • 23.5 x 15.6mm Exmor CMOS 
  • BIONZ X image processing engine
  • 3:2 ratio photos
  • Fast Hybrid AF – 425 points phase and contrast detection
  • Continuous shooting up to 11fps
  • New Z battery lasts 810 shots

We could list the impressive specs of this compact APS-C camera until they came out of our ears, but you get more of a sense of how good it can be by actually using it. The one thing that stood out from our initial test was just how fast the camera is. It focuses in no time at all with a quick half-press of the shutter, and then almost as quickly snaps the shot as soon as you fully depress the shutter button.


You can see the results of these efforts in the collection above, where we tested it shooting a range of different shots, both close up and zoomed in far away, landscape and portrait. Using the new 16-55mm lens we were able to get some lovely close-up shots with a nice smooth bokeh in the background, with great colours and textures in the image. 

Inside, powering this performance is the same advanced Bionz X image processing engine that’s inside the much larger A9 camera. This – Sony promises – means clear, sharp images with really good low light performance, low noise levels and a wide sensitivity range. 

We only got a few hours to test the camera, and predominantly outdoors in good daylight, so we’ll need to do quite a lot more testing – particularly in low light scenarios – when it comes time to fully review the A6600. 

The thing that Sony was keen for us to try out though was the real time tracking and autofocus features, and given how quickly and accurately it worked in our testing so far, we’d say that Sony has cracked it. 

Sony A6600 review image 8

It not only uses hybrid phase and contrast detection autofocus over 425 points, but also automatically detects faces and eyes, and can track them when they’re moving in the shot, focussing on them. You can even switch it to focus on animal eyes instead if you want. 

4K video chops

  • 4K video – 30fps – HDR (HLG)
  • Super 35mm format
  • 1080p at up to 100fps
  • 5-axis In body stabilisation
  • Real-time tracking/Eye AF

For the videographers and vloggers of the world, Sony has ensured the A6600 is equipped with some pretty high-performance specs and capabilities. The aim: to give professionals a great B-camera experience while simultaneously giving prosumers and vloggers something that is far more than just capable of shooting video. In fact, it's a tool that makes shooting great video super simple. Or at least, that's the claim.

Sony A6600 review image 5

All the features that make taking pictures easy also work in video/movie mode. That means you get the real-time tracking and autofocus even when you're shooting in 4K. Combine that with the in-body five-axis stabilisation and you have something that not only keeps focus on your subject effortlessly, but does so without an excessive amount of shakiness when shooting handheld.

In practice it's pretty impressive. Following a fast-moving pigeon around Copenhagen's food markets, the camera kept the bird in focus easily, quickly and, just as importantly, silently. Similarly, we were able to film cyclists rushing past while keeping them in focus throughout the frame.

Most of this was shot using Sony’s new 16-55 mm lens with f/2.8 aperture throughout, and that certainly did no harm. This new lens is quick, and doesn’t lose any light no matter whether you’re zoomed out to its widest 16mm angle or zoomed into 55mm.

Of course, video quality isn't the only thing that matters to video makers. Sound quality, convenience and practicality matter too, and that's why Sony included a handful of very necessary ports along one side. Most important are the two 3.5mm jacks, for mic/line-in and headphone out. That means you can not only hook up an external microphone (even a proper mic with XLR, provided you have the right adapter), but you can also monitor your levels with a pair of wired headphones.

Sony A6600 review image 3

For external monitoring of visuals, rather than audio, there’s a mini HDMI port, which allows you to hook your camera up to a screen.

In video terms then, it’s a very versatile camera. Some videographers might bemoan the lack of 4K/60 recording, and if we’re honest, we were a little disappointed not to see it there. However, with the addition of 100p full HD video, you can still shoot sharp video and slow it down without losing any smoothness. It just won’t be at 4K resolution.

As well as the issue of the screen being blocked by any mounted accessories or microphones (which we mentioned earlier), the only other video frustration is that Sony's camera doesn't save video files in the same folder as the photos on the memory card. You have to go digging through the "MISC" folder to find them. Once you know where they are it's not a big issue, but it seems a little counter-intuitive. We understand why you might want them kept separate, but Sony should at least store them in a primary folder next to "DCIM" that clearly indicates it contains videos.

First Impressions

If what you’re after is as much power as you can get in a small package, Sony’s A6600 is a very tempting offering. The market-leading camera maker has crammed some really high-end specifications and capabilities into a device that’s small enough to carry around every day. 

The lasting impression it left was a winning one – at least on first impressions. Not only is it portable and comfy to take on an afternoon of shooting, it does its job effortlessly and without as much as a whimper. It focuses and snaps images really quickly, with the end results looking beautiful and sharp with great bokeh and colours (thanks to that new 16-55mm lens). 

For the video maker, it promises to be an excellent “B” camera for when you need something powerful that doesn’t take up any extra space, and will give you great looking footage and won’t cause much stress. 

We will need more time with it to give our full verdict, but first impressions as an all-round photo and video device are very good indeed. We can’t wait to spend more time with it. 

Snake oil or genius? Crown Sterling tells its side of Black Hat controversy

Crown Sterling's presentation at Black Hat triggered cryptography experts.


Robert Grant is a reluctant cryptographer.

“The last thing I would’ve wanted to do is start another company,” Grant, the CEO and founder of Crown Sterling, told Ars. “It’s like my wife asking me if we can have another child… I have two. And I am not looking forward to another child.”

But he and a collaborator believed that they had made a profound discovery, one that would fundamentally shake the core of modern encryption. “We thought, well, just out of a sense of responsibility, we should start a non-factor-based encryption technology,” Grant said. “And that’s what we did with Time AI.”

Crown Sterling claims that its Time AI cryptographic system will fix the breakable-ness of RSA cryptography by using an entirely different method of generating keys, one that doesn’t rely on factoring large prime numbers. Time AI is intended to resist cracking even by advanced quantum computing technology—which has concerned cryptographers because of its potential to more rapidly perform algorithms capable of solving the difficult math problems that cryptography relies on.

Time AI, announced by Grant in a controversial sponsored presentation at Black Hat USA earlier this month, is not yet a product. In fact, Crown Sterling has not published any technical details of how Time AI works. (Grant said that the company is working on a “white paper,” and it should be out by the end of the year.) An academic-style paper published by Grant and presented at Black Hat claims that most Internet cryptography can be cracked, but it has been challenged by mathematicians and cryptographers. And the company’s recent Las Vegas presentation was interrupted by one very persistent heckler and then disavowed by Black Hat, leading to a lawsuit against the conference.

So when Crown Sterling’s spokesperson reached out to offer Ars the company’s side of the story, around both Time AI and the now-legendary Black Hat event, we were eager to hear it.

Who are these guys?

Grant, a self-proclaimed polymath, has a background in the healthcare industry. “I helped lead the Botox brand,” he said. “I was formerly president of Allergan Medical—it’s a multi-billion dollar business. And I launched products that became household names to consumers [such as Natrelle breast implants, Juvederm injectable cosmetic gel, and Lap-Band adjustable gastric bands for weight loss surgery] even though they were sold through intermediaries.”

After leaving Allergan, Grant was president of an eye surgery equipment unit of Bausch and Lomb. When Bausch and Lomb sold that unit out from under him—an experience Grant discussed in his TEDx Orange County talk—he moved back into the “lifestyle health” industry. Almost all of the businesses that operate under the banner of his Strathspey Crown holding company are in some way connected to cosmetic or “wellness” focused health.

Grant claims to speak Japanese, French, Korean, and German fluently. His Crown Sterling biography states that he “holds several patents and various intellectual property in the fields of DNA and phenotypic expression, human cybernetic implantology, biophotonics, and electromagnetism.” And it also states that he “has multiple publications in unified mathematics and physics.”

Grant is also the director of the board of the Resonance Science Foundation, “the intersection of science, community and consciousness.” Grant has produced two video lecture series for Resonance Academy “delegates”. The first is called “The Etymology of Number,” a four-part series that “examines the discovery and evolution of the human understanding of numbers and their role in physics, chemistry, photonics, gravity, music, art, architecture, mathematics, measurement, time and human awareness.” The fourth lecture in the series “culminates in the presentation and discussion of a new unified ‘theory of everything.’”

The second series is called “The Language of Light,” an advanced six-part series that:

dives deeply into the ground-breaking discovery of new mathematical constants, derived from prime number patterns and their interactive role with known constants in forming the universe of geometry and embodied as a beautiful symphony of matter and life. This course attempts to unlock the mysteries of science and esoterica from a holistic perspective, combining history and ancient sites, ageless symbology, polymathic philosophy, biology, musical theory and alchemy… We also explore the practical application of these mathematical discoveries and how they can be utilized along with hertz EMGR (Electro-Magneto-Graviton-Radioactivity) to better understand time, the Inverse Square Law, biology, DNA genotypic and phenotypic expression, vacuum energy and matter transmutation.

Grant is also a scheduled speaker this October at the Conference of Precision and Ancient Knowledge (CPAK), where he will discuss “the real DaVinci Code,” as detailed in this trailer posted by CPAK:

A teaser video for Robert Grant’s CPAK talk.

Joseph Hopkins, Crown Sterling’s chief operating officer, is a senior partner and COO for Grant’s Strathspey Crown, and he also worked at Allergan in sourcing and procurement. Prior to joining Strathspey Crown, Hopkins was a procurement and operations advisory leader at KPMG. He claims to be a “thought-leader in the AI space,” according to his LinkedIn and Crown Sterling biographies, and to have “authored key patent applications about network security, identity verification, content security, as well as network tracking/use verification.”

Alan Green (who, according to the Resonance Foundation website, is a research team member and adjunct faculty for the Resonance Academy) is a consultant to the Crown Sterling team, according to a company spokesperson. Until earlier this month, Green—a musician who was “musical director for Davy Jones of The Monkees”—was listed on the Crown Sterling website as Director of Cryptography. Green has written books and a musical about hidden codes in the sonnets of William Shakespeare.

Many of the people involved in Crown Sterling are connected either to Strathspey Crown or the Resonance Foundation. But Grant insists that Crown Sterling has nothing to do with either of them.

“We are financed by ourselves as individuals, family offices and other accredited investors, and there’s no investment whatsoever from Strathspey,” Grant said. “The only relationship [to Strathspey Crown] is that my partner, Vic Malik, and myself are the founders of both organizations.”

iPhone exploits in hacked websites went unnoticed for years


Researchers from Google’s Project Zero security initiative on Thursday revealed the discovery of a collection of hacked websites that for years hosted a series of exploits targeting iPhone models up to iPhone X running the current version of iOS 12.


Outlined in a blog post, Google said its Threat Analysis Group (TAG) uncovered the “small collection” of websites earlier this year.

“The hacked sites were being used in indiscriminate watering hole attacks against their visitors, using iPhone 0-day,” writes Project Zero’s Ian Beer. “There was no target discrimination; simply visiting the hacked site was enough for the exploit server to attack your device, and if it was successful, install a monitoring implant.”

Beer estimates the sites receive thousands of visitors per week.

TAG believes the hacks are the work of a bad actor who, over a period of at least two years, conducted an operation to infiltrate select iPhone user demographics targeted by the undisclosed sites. The group found evidence of five unique iPhone exploit chains that cover “almost every version” of iOS from iOS 10 to the current iteration of iOS 12. Impacted iPhones range from iPhone 5s to iPhone X.

In all, Google researchers discovered 14 vulnerabilities impacting iPhone’s web browser, kernel and sandbox security mechanism, one of which was a zero-day.

As noted by Motherboard, which reported on Google’s findings earlier today, the exploits were used to deploy an implant designed to steal files and upload real-time GPS location data. In addition, the implant accessed a user’s keychain, the feature responsible for securely storing passwords, as well as the databases of end-to-end encrypted messaging apps like iMessage. It also took copies of Contacts data and Photos, Beer writes.

While the malware is cleaned from an infected iPhone upon rebooting, Beer notes attackers might be able to “maintain persistent access to various accounts and services by using the stolen authentication tokens from the keychain, even after they lose access to the device.” Alternatively, visiting the hacked site would reinstall the implant.

Google informed Apple of the issue on Feb. 1, presenting the company a seven-day window in which to plug the holes. Apple subsequently released a patch with iOS 12.1.4 on Feb. 7 and disclosed Google’s findings in an accompanying support document.

Apple’s iOS 12.1.4 update also patched a pair of Foundation and IOKit flaws discovered by Google’s Project Zero team lead Ben Hawkes. Both zero-day vulnerabilities were used to hack devices in the wild.

Is Golang the new programming language in town?

Will Golang replace Java as the top choice for Android app development? Will apps like Slotocash Casino mobile app benefit from using Golang in the future?

Is a new programming language needed?

With so many programming languages already in existence, is another one needed? In 1973, the C Programming Language became more powerful with the introduction of structs. Ten years later, in 1983, C++ (C with classes) came onto the market.

The C Programming Language is an excellent programming language when speed and memory usage are a top priority. But both C and C++ were developed at a time before Unicode. And since direct memory manipulation is a major feature of the language, trying to use Unicode in C applications becomes complicated.

The C Programming Language was designed when every character a program handled fit into 1 byte. A Unicode character, by contrast, can take 1, 2, 3 or 4 bytes when encoded as UTF-8.

On the other end of the spectrum are interpreted languages. Bash, Perl, Python and PHP are all well-known interpreted languages. The code is translated as it runs, so it is not machine-dependent. The downside is speed. The programmer is trading speed for ease and portability.

What about Golang?

Many new programmers ask the question, “What is the best programming language?” or “Which programming language should I learn?” That would be the same as going into Home Depot and asking the store manager, “What is the best tool?” The answer is, “It depends on what you are trying to do.”

Google processes a lot of data. Some are preprocessed and stored in databases. While other data is processed on an as-needed basis. This data comes from all over the world and uses Unicode characters, not ASCII characters.

The designers of Go call Go “what C should have been”, and this can definitely be seen with how Golang handles Unicode characters.

When talking specifically about Unicode characters, Go calls them “runes”. When talking about ASCII characters, they are “bytes”. An array of bytes is called a “slice of bytes”, and an array of runes is a “slice of runes”.

Arrays, like strings, are data types that carry some overhead. A string in Golang is a higher-level, read-only data type; it is not the same as a slice of bytes, which gives you direct access to the underlying memory.

For an experienced programmer just learning Golang, it is a different way of thinking about characters, but when it comes to actually programming with data that is Unicode intensive (for example, Hebrew and Arabic), Golang is a pleasure to work with.

Golang and concurrency

The Go language has built-in facilities, as well as library support, for writing concurrent programs. Concurrency refers not only to CPU parallelism but also to asynchrony: letting slow operations like a database or network-read run while the program does other work, as is common in event-based servers.

The primary concurrency construct is the goroutine, a type of light-weight process. A function call prefixed with the go keyword starts a function in a new goroutine. Current implementations multiplex a Go process’s goroutines onto a smaller set of operating system threads.

While a standard library package featuring most of the classical concurrency control structures (mutex locks, etc.) is available, idiomatic concurrent programs instead prefer channels, which provide a way to send messages between goroutines. Optional buffers store messages in FIFO order and allow sending goroutines to proceed before their messages are received.

Unlike previous concurrent programming languages, Go does not provide any built-in notion of safe or verifiable concurrency. While the communicating-processes model is favored in Go, it is not the only one: all goroutines in a program share a single address space. This means that mutable objects and pointers can be shared between goroutines.

Let’s look at our original example of a Casino. Let’s say it is a very popular casino website, and it gets thousands of visitors an hour. If the program ran in a queue, the first person in line would get to play the game, and when they were finished, the next person in line can play the game. And the queue would run accordingly.

But with concurrency, Golang becomes a gatekeeper to other Golang programs (sub-processes). The first in line tells the gatekeeper what they want to play, and then Golang sends the player off on a new process.

That process does not need to finish before the next customer that wants to play a game puts in their request. Because each game is a separate process, it does not matter how long or how short the game is being played, or how simple or how complex the game is.

Okay, the casino example may not be the best, because a large portion of the casino code is with the user interface (HTML and JavaScript), not server code (Golang).

But you get the idea.

Improving Siri’s privacy protections – Apple

At Apple, we believe privacy is a fundamental human right. We design our products to protect users’ personal data, and we are constantly working to strengthen those protections. This is true for our services as well. Our goal with Siri, the pioneering intelligent assistant, is to provide the best experience for our customers while vigilantly protecting their privacy.

We know that customers have been concerned by recent reports of people listening to audio Siri recordings as part of our Siri quality evaluation process — which we call grading. We heard their concerns, immediately suspended human grading of Siri requests and began a thorough review of our practices and policies. We’ve decided to make some changes to Siri as a result.

How Siri Protects Your Privacy

Siri has been engineered to protect user privacy from the beginning. We focus on doing as much on device as possible, minimizing the amount of data we collect with Siri. When we store Siri data on our servers, we don’t use it to build a marketing profile and we never sell it to anyone. We use Siri data only to improve Siri, and we are constantly developing technologies to make Siri even more private. 

Siri uses as little data as possible to deliver an accurate result. When you ask a question about a sporting event, for example, Siri uses your general location to provide suitable results. But if you ask for the nearest grocery store, more specific location data is used.

If you ask Siri to read your unread messages, Siri simply instructs your device to read aloud your unread messages. The contents of your messages aren’t transmitted to Siri’s servers, because that isn’t necessary to fulfill your request.

Siri uses a random identifier — a long string of letters and numbers associated with a single device — to keep track of data while it’s being processed, rather than tying it to your identity through your Apple ID or phone number — a process that we believe is unique among the digital assistants in use today. For further protection, after six months, the device’s data is disassociated from the random identifier.

In iOS, we offer details on the data Siri accesses, and how we protect your information in the process, in Settings > Siri & Search > About Ask Siri & Privacy.

How Your Data Makes Siri Better

In order for Siri to more accurately complete personalized tasks, it collects and stores certain information from your device. For instance, when Siri encounters an uncommon name, it may use names from your Contacts to make sure it recognizes the name correctly.

Siri also relies on data from your interactions with it. This includes the audio of your request and a computer-generated transcription of it. Apple sometimes uses the audio recording of a request, as well as the transcript, in a machine learning process that “trains” Siri to improve.

Before we suspended grading, our process involved reviewing a small sample of audio from Siri requests — less than 0.2 percent — and their computer-generated transcripts, to measure how well Siri was responding and to improve its reliability. For example, did the user intend to wake Siri? Did Siri hear the request accurately? And did Siri respond appropriately to the request?

Changes We’re Making

As a result of our review, we realize we haven’t been fully living up to our high ideals, and for that we apologize. As we previously announced, we halted the Siri grading program. We plan to resume later this fall when software updates are released to our users — but only after making the following changes:

  • First, by default, we will no longer retain audio recordings of Siri interactions. We will continue to use computer-generated transcripts to help Siri improve. 
  • Second, users will be able to opt in to help Siri improve by learning from the audio samples of their requests. We hope that many people will choose to help Siri get better, knowing that Apple respects their data and has strong privacy controls in place. Those who choose to participate will be able to opt out at any time. 
  • Third, when customers opt in, only Apple employees will be allowed to listen to audio samples of the Siri interactions. Our team will work to delete any recording which is determined to be an inadvertent trigger of Siri.

Apple is committed to putting the customer at the center of everything we do, which includes protecting their privacy. We created Siri to help them get things done, faster and easier, without compromising their right to privacy. We are grateful to our users for their passion for Siri, and for pushing us to constantly improve.

For more information: Siri Privacy and Grading