IEEE P7011 Working Group

Standard for the Process of Identifying and Rating the Trustworthiness of News Sources

Kick-off Meeting Minutes

April 13, 2018, 15:00-16:30 US ET

WebEx

  1. Call to Order
    1. Chair called the meeting to order at 15:05
  2. Welcome to IEEE P7011 Working Group
  3. Roll call of Individuals
    1. Contact information will be submitted via e-mail due to time constraints
  4. Approval of Agenda
    1. Chair asked for any objections; there were none
  5. Introduction and Approval of Working Group Policies and Procedures (P&P)
  6. Patent Slide Presentation
    1. Chair reviewed the patent slide and performed a call for patents
  7. Summary of Standard Goals
    1. Develop a standard by which news purveyors can be evaluated.
    2. Blend autonomous analysis with human-driven processes to tackle the large scope of this issue.
    3. Work with industry to adopt, implement, and improve this solution.
    4. Provide the public with an easy-to-understand rating of news purveyors.
    5. Chair reviewed and described each item.
  8. Possible Sub Group Recommended Areas
    1. Factual accuracy
    2. Detection of degree and consistency of bias
    3. Usage of misleading headlines
    4. Existence and utilization of effective retraction policies and procedures
    5. Clear distinctions between advertisements and content
    6. Presentation of ratings to users
    7. Statistical analysis
    8. Others?
    9. Chair reviewed and described each item.
  9. Vacant Officer Positions
    1. Vice Chair
    2. Secretary
    3. Chair reviewed and described each position.
  10. Website Development
    1. Chair reviewed and described needs.
  11. Technical Editor
    1. Chair reviewed and described needs.
  12. New Business
    1. The chair opened the floor for discussion
  • Transparent metrics
  • Desire to learn about colleagues on the team; curiosity about participation of journalism schools and/or ombudsmen/public editors. Any thoughts re: that sort of expertise?
  • What happens if there isn’t unanimous agreement about whether to include some feature, or how to resolve some disagreement?
  • Probably add Knowledge Discovery (via data-driven analysis) and Hypothesis-driven spaces, since statistical analysis alone is a little constrained.
  • Rough consensus or full consensus, whichever is reached, is noted, as I assume are any minority views.
    1. Consensus is sought at the working group level before moving a draft forward to ballot.
    2. Consensus is also sought during the balloting of a draft standard.
  • What is the timeline we are thinking about?
    1. An IEEE-SA project begins with a 4-year lifespan.
  • Will the standard be only about a “rating” of the trustworthiness of a source, based on a rating system, or also about distinguishing something stated as fact from something that is “probably” an opinion?
  • Two thoughts that I have are validation that the source of the assessment is from this standard, and rating op/ed v. factual news.
  • Why not track both source and publisher? Seems like it is potentially low-hanging fruit. Yes, authors and publishers, especially on social networks where you find references to blogs and also to “fake” publishers…
  • That it isn’t a vendor saying that their info is A quality.
  • I suspect that from a consumer POV we would also ask why the focus is only on “large” publishers…
  • For analytics, probably collapse everything into Knowledge Discovery (data-driven, lexical, social network analysis) and Hypothesis-driven (discriminative and generative models, class discovery, class prediction).
  • I think we may be headed down the path (in the long term) of a non-profit organization that vets news sources and potentially vouches for their reputability by providing a metric.
  • Big question: Would this standard result in global consensus on what letter grade a particular outlet (e.g., NYTimes, InfoWars) gets, or only a process by which those outlets could get completely different grades from different people or different rating agencies?
  • Metrics for tracking information sources do not need to vet the source. The statistics should be useful for allowing the reader to ‘vet’.
  • Not to pester with the same thought, but in case anyone missed it: a working-group process has been initiated with CEN standardization, led by Reporters Without Borders, with support from Digital Content Next and others: https://rsf.org/en/news/rsf-and-its-partners-unveil-journalism-trust-initiative-combat-disinformation
    1. IEEE-SA fosters collaboration with Standards Development Organizations and consortia.
  • Is there a link or pointer for Robert’s Rules of Order as it applies to IEEE? Yes, IEEE follows Robert’s Rules of Order.
    1. IEEE-SA recommends use of RROO for Working Group proceedings; however, the degree to which this is done is up to the Chair.
    2. IEEE implementation of RROO: https://development.standards.ieee.org/myproject/Public/mytools/mob/robrules.pdf
    3. Full text: http://www.rulesonline.com/
  • I know that there are industry-driven working groups trying to tackle this problem; would there be a plan to collaborate with them?
  • Can we learn from Schema.org (a consortium including Reddit, Facebook, LinkedIn, Google, and Bing) and its metatags for Web search and shopping?
  • Concern re “State Actors” manipulating ratings
  • Very early thoughts are that including authors and social media publishers should be considered in scope… obviously this needs lots more discussion.
  • From other conversations I have been a part of, including The Trust Project and RSF’s Journalism Trust Initiative, the platform folks are quick to sign on to standards initiatives. …fairly desperate for help in that regard, it seems.
  • I just think that assuming too much about the source of information is dangerous. Ideally an algorithmic approach would not need to distinguish the source type. It seems likely source types will change, so a larger net is advantageous.
  • Would you say it’ll be easier or harder to game a system that discounts source? (That’s not snark — genuine question — thinking like a hacker, etc.)
  • I’d also emphasize that it’s not just the platforms concerned about this — news outlets are very much concerned. Especially trustworthy, independent ones losing money to disinformation. Echoing the need to include the industry in all of these conversations
  • Of course. I would assume that for any unit of information analyzed, there would be some idea of the source of that information. If I am understanding you correctly, there is no advantage to processing that particular data. Or am I misunderstanding you and you’re saying that giving any particular source a particular weight is the problem…?
  • I see this as an auditing task. The intent is to improve all news sources to A’s… but that gets into “censorship” worries…
  • Yes, I would be extremely cautious about automated content analysis. That would be hugely concerning to folks in the news industry.
  • A ranking system *may* indeed be a tool in that audit (or, more, in encouraging better practice), I suspect. But again, from a consumer POV any system needs to be predictable, easily understandable, simple, and of course trusted, accountable, and transparent.
  • Seen in that light it may reduce to that “fake news” label, true. (Two years ago I would have assumed they all wanted to be A’s. Now I wonder if some outlets would consider an A to be selling out.)
  • Thousands of years ago I sat in on the meetings that turned into the ESRB, and I remember that conversation. I may need to think back hard and see if ancient wisdom can help us not have to re-invent that consumer-uptake wheel 🙂.
  • Consumers often wish to “trust” a ranking “agent” rather than understand the mechanisms of the ranking… BUT they also want to be ABLE to ensure such mechanisms are valid and the ranking “agency” demonstrably unbiased.
  • Again, news outlets themselves are key in any standards setting — as are advertisers, with brand incentive
  • A possible aspect to discuss is how to prevent fake news purveyors from learning about the evaluation method and then adopting countermeasures that disguise their fake stories in a format or style which will not be detected by the method.
  • Each of the news sources has its own “standards”; perhaps the “view” is a digraph from each source to another… (a minimal sketch follows below).
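    A minimal sketch of that digraph idea (Python; all names and values are illustrative assumptions, not an agreed design): each source assigns a trust score in [0, 1] to other sources, and one simple aggregate “view” of a source is the mean of its incoming scores.

        # edges[rater][rated] = rater's trust score for the rated source
        # (hypothetical sources and scores, for illustration only)
        edges = {
            "source_a": {"source_b": 0.8, "source_c": 0.2},
            "source_b": {"source_a": 0.7, "source_c": 0.4},
            "source_c": {"source_a": 0.3, "source_b": 0.9},
        }

        def incoming_mean(edges, target):
            """Average the trust scores other sources assign to `target`."""
            scores = [out[target] for out in edges.values() if target in out]
            return sum(scores) / len(scores) if scores else 0.0

        for source in edges:
            print(source, round(incoming_mean(edges, source), 2))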
  • Fact checkers org: https://www.poynter.org/channels/fact-checking
  • Tim, I think that communication of fact checks to platforms is probably more in the territory of schema.org and/or the W3C group https://www.w3.org/community/credibility/
  • I am saying that a useful algorithm should not be limited to a particular type of source. For example, it may be practical to establish a retrospective error rate for a publication and/or author. Another potentially viable metric might be reliability based on the number and duration of publication history. That should apply whether it is a publication or a specific author. I imagine there is a series of potential tools that can be used to help readers and republishers such as Facebook produce a useful, simplified means of flagging material that is less or more reliable (trustworthy). (See the sketch below.)
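    A minimal sketch of those two metrics (Python; the record fields and values are illustrative assumptions, not part of any standard): a retrospective error rate (corrected or retracted stories over total stories) and a longevity signal (duration of publication history). The same functions apply to an outlet or an individual author.

        from dataclasses import dataclass
        from datetime import date

        @dataclass
        class PublicationHistory:
            total_stories: int
            corrected_stories: int  # stories later retracted or corrected
            first_published: date

        def error_rate(h: PublicationHistory) -> float:
            """Fraction of published stories that later needed correction."""
            return h.corrected_stories / h.total_stories if h.total_stories else 1.0

        def longevity_years(h: PublicationHistory, today: date) -> float:
            """Duration of publication history, in years."""
            return (today - h.first_published).days / 365.25

        outlet = PublicationHistory(total_stories=12000, corrected_stories=90,
                                    first_published=date(1995, 6, 1))
        print(f"error rate: {error_rate(outlet):.2%}")                            # 0.75%
        print(f"history: {longevity_years(outlet, date(2018, 4, 13)):.1f} years")  # 22.9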
  • I think a large consumer base can give feedback about the news they consume, and we can use an algorithm that takes this feedback without bias while maintaining diversity, to keep different opinions.
  • If we make a rating system and have non-profit organizations implement it, the question will arise about the bias of those non-profit organizations.
  • In case this is interesting: botswatch developed a tool that detects social bots automatically and in real time on Twitter. http://botswatch.de/ In German media: http://www.business-punk.com/2018/03/botswatch-analyse-fake-news/
  • Having IEEE define or distill a standard for rating the sources is needed.
  • These are essential issues to consider and go very much to “trust”.
  • Bias is always an issue, but a public algorithm can be independently validated.
  • A point was raised about taking feedback from people.
  • People can give troll/sarcastic reviews.
  • What’s the gold standard by which news is trustworthy? Everyone is biased to some extent.
  • That’s why I think we should take a very large set of people, weighted against their past records (a sketch follows below).
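    A minimal sketch of that weighting idea (Python; the reliability scores and ratings are illustrative assumptions): pool ratings from a large set of readers, but weight each rating by the reader’s reliability derived from their past record, so troll or sarcastic reviews carry little influence.

        # Each pair is (reader_reliability, rating), both in [0, 1];
        # values are hypothetical, for illustration only.
        feedback = [
            (0.9, 0.8),  # reader with a long, consistent review history
            (0.9, 0.7),
            (0.1, 0.0),  # new account with previously flagged reviews
        ]

        def weighted_rating(feedback):
            """Reliability-weighted mean of reader ratings."""
            total_weight = sum(w for w, _ in feedback)
            if total_weight == 0:
                return None
            return sum(w * r for w, r in feedback) / total_weight

        print(round(weighted_rating(feedback), 3))  # 0.711: the troll barely moves it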
  • I appreciate this. I am a cybersecurity professor and IEEE-USA VP of Communications. I have an interest but can’t commit to a role at this time.
  • We have no good tool for surfacing the history of sources.
  13. Next Meeting: April 27 @ 15:00
    1. Comment: Time rotation or a fixed time is OK as long as it is established in advance; a Doodle poll is fine.
  14. Adjourn