Apple explores artificial intelligence deals with news publishers


Apple has opened negotiations in recent weeks with major news and publishing organizations, seeking permission to use their material in the company’s development and training of generative artificial intelligence systems, The New York Times reported Friday, citing “four people familiar with the discussions.”

Benjamin Mullin and Tripp Mickle for The New York Times:

The technology giant has floated multiyear deals worth at least $50 million to license the archives of news articles, said the people with knowledge of the talks, who spoke on the condition of anonymity to discuss sensitive negotiations. The news organizations contacted by Apple include Condé Nast, publisher of Vogue and The New Yorker; NBC News; and IAC, which owns People, The Daily Beast, and Better Homes and Gardens.

The technology, which artificial intelligence experts refer to as neural networks, is built by using troves of photos or digital text to recognize patterns. By analyzing thousands of cat photos, for instance, a computer can learn to recognize a cat.

Microsoft, OpenAI, Google, Meta and other companies have released chatbots and other products built with the technology. The tools could change the way people work and generate billions of dollars in sales…

Several publishing executives were concerned that Apple’s terms were too expansive, according to three people familiar with the negotiations. The initial pitch covered broad licensing of publishers’ archives of published content, with publishers potentially on the hook for any legal liabilities that could stem from Apple’s use of their content.

Apple was also vague about how it intended to apply generative A.I. to the news industry, the people said, a potential competitive risk given Apple’s substantial audience for news on its devices.
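The Times’ cat-photo example describes supervised learning: show a model enough labeled examples and it learns the statistical patterns that separate one label from another. Here is a minimal sketch of that idea, using scikit-learn’s built-in handwritten-digits dataset as a stand-in for archives of photos or articles; it illustrates the general technique only and implies nothing about Apple’s actual systems.

```python
# Toy illustration of "learning patterns from troves of labeled examples".
# The digits dataset stands in for "thousands of cat photos"; a small
# neural network stands in for the far larger models discussed above.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 8x8 grayscale images of handwritten digits, labels 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0
)

# The network adjusts its weights until pixel patterns reliably map to labels,
# the same basic process by which a larger model learns what makes a "cat".
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

print(f"Accuracy on unseen images: {model.score(X_test, y_test):.2f}")
```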


MacDailyNews Take: We’ll let Google’s Bard remark on this one, verbatim:

The field of generative AI, despite its remarkable progress, carries with it concerns about potential biases introduced during its training, particularly in sensitive areas like politics. Here are some key points to understand:

Data Biases: The data used to train AI models often reflects the biases present in society. This means a model trained on news articles might unintentionally absorb and perpetuate existing political biases, potentially favoring certain viewpoints or ideologies.

Algorithmic Biases: Even with balanced data, the algorithms used for training can introduce unintended biases. This can happen through factors like how the algorithm prioritizes or weights certain information, potentially amplifying certain perspectives and suppressing others.

Output Bias: The bias can manifest in the outputs generated by the AI. For example, if asked to write a political speech, the AI might favor certain candidates or policy positions based on the biases it learned during training.

Impact on Society: Political bias in AI can have harmful consequences. It can lead to unfair discrimination, manipulation of public opinion, and the stifling of diverse viewpoints. For instance, biased AI used in social media algorithms could inadvertently suppress certain political groups’ voices or promote false information.

Addressing the Concerns: Researchers and developers are actively working to mitigate these biases. Some approaches include:

• Using diverse and representative training data: This helps ensure the AI is exposed to a wide range of viewpoints and information.

• Developing debiasing algorithms: These algorithms aim to identify and correct for biases in the training data and algorithms themselves.

• Transparency and explainability: Making the AI’s decision-making process more transparent can help identify and address any biases that might be present.

It’s important to remember that AI is still a young and evolving field. While concerns about bias are valid, there is ongoing research and development to address them. As we continue to develop and refine AI, ensuring responsible and unbiased practices is crucial.
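For readers wondering what the “diverse and representative training data” point above can look like in practice, one simple and much-simplified technique is to reweight training examples so that no single source dominates merely because it supplies more text. A minimal sketch follows, with hypothetical outlet labels standing in for real provenance data; production systems involve far more than this.

```python
# Simplified illustration of reweighting a training corpus so each source
# group contributes equally, one basic way to keep over-represented outlets
# from dominating what a model learns. The corpus and labels are hypothetical.
from collections import Counter

corpus = [
    ("article text A", "outlet_1"),
    ("article text B", "outlet_1"),
    ("article text C", "outlet_1"),
    ("article text D", "outlet_2"),
    ("article text E", "outlet_3"),
]

counts = Counter(group for _, group in corpus)
num_groups = len(counts)

# Inverse-frequency weights: every group's examples sum to the same total
# weight (1 / num_groups), regardless of how many examples it contributes.
weights = [1.0 / (num_groups * counts[group]) for _, group in corpus]

for (text, group), w in zip(corpus, weights):
    print(f"{group}: weight {w:.3f}  ({text})")
```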


10 Comments

  1. Uh, I don’t really understand AI, algorithms, etc., but should I (we) be worried about AI being trained with “approved” news sources? Those that fit the “correct” narrative? Those articles deemed (by someone) to be free of mis-, mal-, or dis-information? Hmm… for now I’m goin’ with a big pinch of skeptical.

    1. And that is the real issue behind AI, isn’t it? Whoever creates/controls the AI can and will control the narrative they want the AI to tell. Garbage in, garbage out. Lies in, lies out. The morals, beliefs, and biases of its programmers will steer the AI to its ‘intelligent’-looking outcomes.

    2. With Apple and other tech companies moving to put AI on your phone and devices, I’d be more interested in keeping an eye on how many AIs, each tuned to a person’s media consumption, create an assortment of ‘biased’ instances that make for stickier, more personalized ‘echo chambers’.

      1. Voltaire, you do realize that “correct spelling” is part of the male-dominated capitalistic patriarchy used to suppress ethnic groups and women who don’t have access to your education and privilege. You need to check your systemic advantages at the door. “Stupidity” is a bigoted term. You obviously undervalue the diversity of creative spelling and the cultural enrichment of Ebonics. Enough with your use of shame for the incorrect use of the white English dialect. Stop using your advantages to oppress alternative voices and marginalize opposing views with your “make believe” construct of correct spelling.

        1. If you were a regular visitor here, you would know that “Sam” is a regular bigot, always complaining about “libturds” and hurling offensive abuse at anything he regards as remotely “woke”.

          Perhaps Voltaire is neither educated nor privileged, but simply tired of Sam’s relentless prejudice.

