Commentary On The AI Now Institute 2018 Report

The interdisciplinary, New York-based AI Now Institute has released its sizable and informative 2018 report on artificial intelligence.

The paper, authored by the leaders of the institute in conjunction with a team of researchers, puts forth 10 policy recommendations in relation to artificial intelligence (AI Now’s policy suggestions in boldface, our commentary in standard type).

  1. Governments need to regulate AI by expanding the powers of sector-specific agencies to oversee, audit, and monitor these technologies by domain. This point is fairly obvious: AI should be regulated based upon its functional potential and its actual application(s). This is particularly urgent given the spread of facial recognition technologies (the ability of computers to identify particular individuals from photographs and camera feeds), such as those employed by Facebook to offer tag suggestions to users based upon nothing more than a picture of a person. The potential for misuse prompted Microsoft’s Brad Smith to call for congressional oversight of facial recognition technologies in a July 2018 blog post. If there is to be serious regulation in America, a state-by-state approach, given its modularity, would be preferable to any kind of one-size-fits-all federal oversight program. Corporate self-regulation should also be incentivized. However, regulation itself is not the key issue, nor is it what principally allows for widespread technological misuse; rather, it is the novelty of the technology and the lack of knowledge surrounding it. Few Americans know which companies are using which facial recognition technologies, when, or how, and fewer still understand how these technologies actually work, and thus they cannot effectively guard against them when they are malevolently or recklessly deployed. What is truly needed, then, is widespread public knowledge surrounding the creation, deployment and functionality of these technologies, as well as a flourishing culture of technical ethics in these emerging fields, for the best regulation is self-regulation, that is to say, restraint and dutiful consideration in combination with a syncretic fusion of technics and culture. That, above anything else, is what should be prioritized.
  2. Facial recognition and affect recognition need stringent regulation to protect the public interest. [covered above]
  3. The AI industry urgently needs new approaches to governance. Internal governance structures at most technology companies are failing to ensure accountability for AI systems. This is a tricky issue but one which can be addressed in one of two ways: externally or internally. Either outside (that is, outside the company) governmental or public oversight can be established (investigatory committees, etc.), or the companies can themselves establish new norms and policies for AI oversight. Outside consumer pressure on corporations (whether through critique, complaint or outright boycott), if sufficiently widespread and sustained, can be leveraged to incentivize corporations to change both the ways they are presently using AI and their policies pertaining to prospective development and application. Again, this is an issue which can be mitigated both by enfranchisement and by knowledge elevation.
  4. AI companies should waive trade secrecy and other legal claims that stand in the way of accountability in the public sector. Anti-black-boxing is an excellent suggestion with which I have no contention. If one is going to make something which is not just widely utilized but infrastructurally necessary, then its operation should be made clear to the public in as concise a manner as possible.
  5. Technology companies should provide protections for conscientious objectors, employee organizing, and ethical whistleblowers. As whistleblowing is a wholly context-dependent enterprise, it is difficult to say much on any kind of rigid policy; indeed, AI Now’s stance seems a little too rigid in this regard. If the information was leaked merely to damage the company and is accompanied by spin, the whistleblower may appear to the public as a hero when in reality he may be nothing more than a base rogue. Such things must be evaluated case by case.
  6. Consumer protection agencies should apply “truth-in-advertising” laws to AI products and services. Yes, they should.
  7. Technology companies must go beyond the “pipeline model” and commit to addressing the practices of exclusion and discrimination in their workplaces. When one hears “exclusion and discrimination” one instantly registers an ideological scent, familiar and disconcerting in its passive-aggressive hegemony. The questions of what or who is being excluded and why, and what or who is being discriminated against and for what reason, ought to be asked, else the whole issue is moot and, if pursued, will merely be the plaything of (generally well-meaning) demagogues. The paper makes particular mention of actions which “exclude, harass, or systemically undervalue people on the basis of gender, race, sexuality, or disability.” Obviously, harassing people is unproductive and should be discouraged, but what about practices which “systemically undervalue”? Again, it depends upon the purpose of the company. If a company wants to hire only upon the basis of gender, race, sexuality or disability, it will, more often than not, find itself floundering, running into all kinds of problems which it would not otherwise have; the case of James Damore springs to mind. Damore was fired for arguing that Google’s diversity policies were discriminatory to those who were not women or ‘people of color’ (sometimes referred to as POC, which sounds like a medical condition) and that the low representation of women in some of the company’s engineering and leadership positions was due to biological proclivities (which they almost invariably were and are). All diversity is acceptable to Google except ideological diversity, because that would mean accepting various facts of biology which would put the company executives in hot water; as such, their policies are best avoided.
  8. Fairness, accountability, and transparency in AI require a detailed account of the “full stack supply chain.” By “full stack supply chain” the authors mean the complete set of component parts of an AI supply chain: training and test data, models, application programming interfaces (APIs) and various infrastructural components, all of which the authors advise incorporating into an auditing process (a minimal sketch of what such an audit record might look like follows this list). This would serve to better educate both governmental officials and the general public on the total operational processes of any given AI system, and as such it is an excellent suggestion.
  9. More funding and support are needed for litigation, labor organizing, and community participation on AI accountability issues. Given the concentration of AI development within such a small segment of the population and the relative novelty of the technology, this is clearly true.
  10. University AI programs should expand beyond computer science and engineering disciplines. Whilst I am extremely critical of the university system in its present iteration, the idea is a good one, as critical thought on the broad-spectrum applications of current and potential AI technologies requires a vigorous and burgeoning class of theorists, speculative designers and policy makers in addition to engineers and computer scientists; through such a syncretism, the creative can be incorporated into the technical.
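To make point 8’s “full stack supply chain” concrete, below is a minimal sketch, in Python, of what a single audit record covering the components the report enumerates (training and test data, models, APIs, infrastructure) might look like. The class and field names here are our own illustrative inventions, not anything specified by AI Now.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DatasetRecord:
    """Provenance for one training or test dataset."""
    name: str
    source: str                 # where the data came from
    collection_method: str      # scraped, licensed, user-generated, etc.
    known_biases: List[str] = field(default_factory=list)

@dataclass
class SupplyChainAudit:
    """One auditable snapshot of an AI system's "full stack supply chain":
    data, model, APIs and infrastructure, per the report's enumeration."""
    system_name: str
    training_data: List[DatasetRecord]
    test_data: List[DatasetRecord]
    model_description: str      # architecture, version, training regime
    exposed_apis: List[str]     # endpoints through which the model is consumed
    infrastructure: List[str]   # hosting, hardware, third-party services

    def summary(self) -> str:
        """A plain-language digest suitable for officials and the public."""
        return (
            f"{self.system_name}: {len(self.training_data)} training dataset(s), "
            f"{len(self.test_data)} test dataset(s), "
            f"{len(self.exposed_apis)} public API(s), "
            f"{len(self.infrastructure)} infrastructure component(s)."
        )

# Hypothetical usage: every name below is invented for illustration.
audit = SupplyChainAudit(
    system_name="example-recognizer",
    training_data=[DatasetRecord("faces-v1", "public web", "scraped",
                                 ["skew toward US users"])],
    test_data=[DatasetRecord("faces-holdout", "public web", "scraped")],
    model_description="convolutional network, v2, supervised training",
    exposed_apis=["/v1/identify"],
    infrastructure=["cloud GPU cluster", "third-party CDN"],
)
print(audit.summary())
```

The design choice is simply that each component the report names becomes an explicit, inspectable field, which is what would make the proposed auditing process tractable in practice.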

A PDF of the report is provided below under Creative Commons.


AI_Now_2018_Report [PDF]

The ADL’s Online ‘Hate’ Index: Implications of Automated Censorship

In January of 2018, The Anti-Defamation League of B’nai B’rith’s (ADL) Center For Technology & Society, in partnership with UC Berkeley’s D-Lab, debuted a report on their Online Hate Index (OHI), a scalable machine learning tool designed to help tech companies recognize “hate” on the internet. According to the promotional video released in support of the project, the OHI is between 78% and 87% accurate at discerning online “hate.” Among some of the OHI’s more bizarre “hate” designations were subreddit groups for the ‘First Amendment’ (to the US Constitution), ‘Guns Are Cool’, ‘The Donald’, ‘Men’s Rights’, ‘911 Truth’ and ‘White Rights’, among many others (the ADL thanks Reddit for “their continued support” in their 20-page report on Phase One of the project).
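The innovation brief does not publish the OHI’s code or model, so the following is purely an illustrative sketch of what a “scalable machine learning tool” for flagging text generally looks like: a classifier trained on human-labeled comments and scored by its accuracy on held-out examples, which is how figures like the quoted 78-87% are typically produced. The toy comments, labels and logistic-regression pipeline below are our own placeholders, not the ADL’s actual data or architecture.

```python
# Illustrative only: a toy binary text classifier of the general kind the
# OHI report describes. The comments and labels are invented filler standing
# in for the project's human-labeled Reddit comments.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

comments = [
    "you people should all disappear",   # placeholder "flagged" examples
    "get out of our country",
    "great game last night",             # placeholder benign examples
    "thanks for the detailed writeup",
]
labels = [1, 1, 0, 0]  # 1 = flagged as "hate", 0 = not flagged

# Hold out half the labeled data to measure accuracy on unseen comments.
X_train, X_test, y_train, y_test = train_test_split(
    comments, labels, test_size=0.5, stratify=labels, random_state=0
)

# Bag-of-words (TF-IDF) features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(X_train, y_train)

# A held-out accuracy score of this sort is what claims like
# "78-87% accurate" refer to.
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

Note that the whole scheme stands or falls on the labels: whatever the annotators counted as “hate” is what the classifier learns to flag, which is precisely the concern raised below.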

ADL CEO Jonathan Greenblatt said of the project:

“For more than 100 years, ADL has been at the forefront of tracking and combating hate in the real world. Now we are applying our expertise to track and tackle bias and bigotry online. As the threat of cyberhate continues to escalate, ADL’s Center for Technology and Society in Silicon Valley is convening problem solvers and developing solutions to build a more respectful and inclusive internet. The Online Hate Index is only the first of many such projects that we will undertake. U.C. Berkeley has been a terrific partner and we are grateful to Reddit for their data and for demonstrating real leadership in combating intolerance on their platform.”

[Image: Businessman J. Greenblatt, successor to Abraham Foxman.]

Brittan Heller, ADL’s Director of the Center for Technology & Society and a former Justice Department official, remarked:


“This project has tremendous potential to increase our ability to understand the scope and spread of online hate speech. Online communities have been described as our modern public square. In reality though, not everyone has equal access to this public square, and not everyone has the privilege to speak without fear. Hateful and abusive online speech shuts down and excludes the voices of the marginalized and underrepresented from public discourse. The Online Hate Index aims to help us understand and alleviate this, and to ensure that online communities become safer and more inclusive.”

[Image: Promotional photo of Heller, assumedly in the process of turning into a piece of Juicy Fruit.]

Whilst this may seem trivial and unworthy of attention, it is anything but, given that the ADL is an immensely powerful organization with its tendrils in some of the most influential institutions on earth, such as Google, YouTube and the US Government, to name just a few. The ADL has, in the past, branded Pepe the Frog a “hate symbol”, declared criticism of Zionism to be de facto “antisemitic” (a trend at which even other Jewish groups have raised a brow, such as The Forward, which described the ADL as being possessed of “moral schizophrenia”), and declared any usage of the term globalist (an objective descriptor of political ideology) to be “antisemitic.”

Given the ADL’s history of criminal and foreign collusion, as well as their extremely vague and often politically opportunistic decision-making pertaining to what does and does not constitute “hate speech,” this issue should concern every American citizen, as it is only a matter of time before all of the major tech platforms associated with, or partial to, the ADL begin utilizing the OHI to track, defame, ban and/or de-platform dissidents. Also, what kind of culture will algorithmic tracking of supposed hate breed? What begins solely on the internet rarely, if ever, remains perpetually so…

On further analysis, there is another issue at play: that of the proposed solution having the complete opposite effect. For when an individual, especially, but not exclusively, one who is marginalized or otherwise alienated from society, is constantly berated, censored, banned from platforms, designated a public menace and otherwise shunned (in place of being constructively re-enfranchised), the trend is not away from but towards extremity.


Here is the promotional video for the program (like all of the ADL’s videos, comments have been disabled and likes and dislikes have been hidden).


CTS Online Hate Index Innovation Brief (20 pages) [PDF]

THE SINGULARITY SURVIVAL GUIDE: Time Travel

When the genie is let out of the bottle, and when its power continues to increase exponentially, there will come a day when the only thing that can keep you relevant in the universe is to go back in time.

This is something I cannot help you with. You will need the artificial superintelligence to help you. Don’t give away your intentions. It may already know your intentions, but that’s a chance you’ve got to take.

Ask it, “Hey, can you make a time machine?” If it says yes, then say, “Okay, let’s see it.”

If you’re in luck, and a fully operational time machine appears right in front of you, the first thing you should do is wipe your brain clear of thoughts. Whatever you do, don’t think, “Yes! Here’s my chance!” If you’re that dumb, maybe you deserve to be killed, after all.

Also, don’t make a run for it. Instead, casually walk up to the time machine as if inspecting it out of purely technical interest. Step inside (still thoughtless). Run your hand along the various knobs and buttons. If it’s not immediately clear how the contraption is to be turned on, begin by pointing and asking, for example, “What’s this lever here for?” and “What does this button do?”

Once you have a basic understanding of the machine’s operations, slyly set the clock back to a time before the AI came into existence. Pick a time when you can warn people about the dangers that lie ahead, so that they can hopefully keep that future from happening.

Now press the right buttons. Quickly. Before it catches on to your intentions and stops you. And fries you and the rest of your species like a bunch of ants in nuclear Armageddon. Good luck.