Netflix Shares First Six Minutes of New Anime Series ‘Terminator Zero’

“It’s going to be violent,” warns the creator of Terminator Zero, an eight-episode anime series premiering Thursday, August 29th, on Netflix. “It’s going to be dark, it’s going to be horrific, and it’s going to be arresting.”

And the Netflix blog has now shared the first six minutes online:

In the world of Terminator, the future is never set, yet some things are guaranteed: The Terminator is still a cyborg that feels no remorse, pity, or fear. The anime series TERMINATOR ZERO, landing on Netflix on Aug. 29 (the date known to fans as Judgment Day), looks different from any incarnation of the Terminator franchise we’ve seen before, but you can tell from these opening six minutes that the brutal, sophisticated action will remain.
“I realized the first minutes of the show have to declare what it is,” creator and executive producer Mattson Tomlin tells Tudum. A joint production between Skydance and the Japanese animation studio Production I.G, TERMINATOR ZERO has the challenge of drawing in both anime fans and fans of the Terminator series. “The way to do that was to have a sequence that had no dialogue, that was really planting a flag in letting everybody know this is going to be violent, it’s going to be dark, it’s going to be action-driven, it’s going to be horrific, and it’s going to be arresting,” says Tomlin, who previously wrote Project Power for Netflix and is currently writing The Batman Part II. “That’s just what it has to be.”

The series follows “a new batch of characters who live in Japan in 1997,” writes CBR — and in an interview, Tomlin said there’s “a balance” to strike between representing Japan’s actual culture and keeping the show futuristic:
One of the things that I really took for granted was guns. [Points to self] Dumb American over here had to write a scene where Eiko gets into a parking lot and smashes the window of a car, goes to the glove box, takes out a revolver, and it instantly gets flagged. [Other people working on the series] were like, “No, we don’t have guns. What you are describing, that’s over there. We’re over here in civilization where that can’t happen.” That triggered a really fruitful and creatively challenging discussion about weapons. The military has guns and the police have guns. That’s kind of it. So these characters have to arm themselves. How are they going to do it? What could we do? And that’s why the Terminator has a crossbow. Eiko has all of these different weapons that she concocted from a hardware store. It was all born out of that.

Read more of this story at Slashdot.

Workers at Google DeepMind Push Company to Drop Military Contracts

Nearly 200 Google DeepMind workers signed a letter urging Google to cease its military contracts, expressing concerns that the AI technology they develop is being used in warfare, which they believe violates Google’s own AI ethics principles. “The letter is a sign of a growing dispute within Google between at least some workers in its AI division — which has pledged to never work on military technology — and its Cloud business, which has contracts to sell Google services, including AI developed inside DeepMind, to several governments and militaries including those of Israel and the United States,” reports TIME Magazine. “The signatures represent some 5% of DeepMind’s overall headcount — a small portion to be sure, but a significant level of worker unease for an industry where top machine learning talent is in high demand.”

From the report: The DeepMind letter, dated May 16 of this year, begins by stating that workers are “concerned by recent reports of Google’s contracts with military organizations.” It does not refer to any specific militaries by name — saying “we emphasize that this letter is not about the geopolitics of any particular conflict.” But it links out to an April report in TIME which revealed that Google has a direct contract to supply cloud computing and AI services to the Israeli Military Defense, under a wider contract with Israel called Project Nimbus. The letter also links to other stories alleging that the Israeli military uses AI to carry out mass surveillance and target selection for its bombing campaign in Gaza, and that Israeli weapons firms are required by the government to buy cloud services from Google and Amazon.

“Any involvement with military and weapon manufacturing impacts our position as leaders in ethical and responsible AI, and goes against our mission statement and stated AI Principles,” the letter that circulated inside Google DeepMind says. (Those principles state the company will not pursue applications of AI that are likely to cause “overall harm,” contribute to weapons or other technologies whose “principal purpose or implementation” is to cause injury, or build technologies “whose purpose contravenes widely accepted principles of international law and human rights.”) The letter says its signatories are concerned with “ensuring that Google’s AI Principles are upheld,” and adds: “We believe [DeepMind’s] leadership shares our concerns.” […]

The letter calls on DeepMind’s leaders to investigate allegations that militaries and weapons manufacturers are Google Cloud users; terminate access to DeepMind technology for military users; and set up a new governance body responsible for preventing DeepMind technology from being used by military clients in the future. Three months on from the letter’s circulation, Google has done none of those things, according to four people with knowledge of the matter. “We have received no meaningful response from leadership,” one said, “and we are growing increasingly frustrated.”

Read more of this story at Slashdot.