persagen,
@persagen@mastodon.social avatar

Establishing Trust in ChatGPT Biomedical Generated Text: Ontology-Based Knowledge Graph to Validate Disease-Symptom Links
https://arxiv.org/abs/2308.03929

  • goal: distinguish factual information from unverified data
  • one dataset from PubMed articles vs. one from ChatGPT-simulated articles (AI-generated content)
  • striking number of links among terms in the ChatGPT KG, surpassing some of those in the PubMed KG (a rough comparison is sketched below)
  • see the image caption for additional detail
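To make that comparison concrete, here is a minimal sketch in Python (hypothetical edge sets, not the paper's data or pipeline) of checking which disease-symptom links in a ChatGPT-derived KG are corroborated by a PubMed-derived KG. Real work would extract edges via NER/relation extraction and map terms to an ontology first.

# Minimal sketch (not the paper's pipeline): compare disease-symptom edges
# from two sources, e.g. PubMed articles vs. ChatGPT-generated text.
# All edge sets below are hypothetical, for illustration only.

pubmed_edges = {
    ("influenza", "fever"),
    ("influenza", "cough"),
    ("migraine", "photophobia"),
}

chatgpt_edges = {
    ("influenza", "fever"),
    ("influenza", "joint pain"),      # plausible, but needs verification
    ("migraine", "photophobia"),
    ("migraine", "hearing loss"),     # suspicious: absent from the trusted KG
}

shared = chatgpt_edges & pubmed_edges          # corroborated by the literature KG
unverified = chatgpt_edges - pubmed_edges      # candidate hallucinations / novel claims

precision = len(shared) / len(chatgpt_edges)   # fraction of ChatGPT links corroborated
print("corroborated:", sorted(shared))
print("unverified:  ", sorted(unverified))
print(f"edge precision vs. PubMed KG: {precision:.2f}")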

kellogh,
@kellogh@hachyderm.io avatar

@persagen are you aware of similar work that labels text as being factual in nature, not as correct or incorrect, but simply as the kind of statement that COULD be wrong or subject to hallucination? This work seems to validate veracity, but what if I just want to figure out whether veracity might be in question?

persagen,
@persagen@mastodon.social avatar

@kellogh
1/4
Tough questions! 😅 Without having studied the paper closely, it seems to be an early yet important comparison of biomedical knowledge graph construction using synthetic data generated by ChatGPT vs. the published literature.
Issues:

  • ontology depth: granularity can be too coarse or too detailed
  • GPT-family models (incl. ChatGPT) are generative/probabilistic; without self-supervised augmentation (fact-checking) or careful prompting they are prone to hallucination/noise (only mentioned 2-3 times in the paper)
  • different GPT versions/subversions behave differently, and capabilities sometimes regress between releases
persagen,
@persagen@mastodon.social avatar

@kellogh
2/4

  • this work is thus more of a proof-of-concept effort
  • the biomedical literature/domain is complex: the authors' level of domain expertise is unknown
  • that said, a very interesting PoC
  • it would be interesting to see an LLM-generated, triple-based KG that is automatically fact-checked (part-of-speech tagging? coreference resolution is the hard issue); see the sketch below
  • it will be nice to see how this plays out; imo, structured KGs (ontological; topic-based) are tremendously important/useful!
  • noise is always an issue, as are (in health applications) transparency and explainability
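A minimal sketch of that "auto fact-checked triple KG" idea: a naive regex extractor pulls (disease, has_symptom, symptom) triples from generated text and flags any triple missing from a trusted reference set. The pattern, sentences, and trusted set are illustrative only; real pipelines need NER, relation extraction, and coreference resolution ("it causes fever": what does "it" refer to?), which is where it gets hard.

import re

# Trusted reference triples (illustrative; in practice drawn from a curated KG/ontology)
TRUSTED = {("influenza", "has_symptom", "fever")}

# Naive extraction pattern for simple declarative sentences
PATTERN = re.compile(r"(\w+) (?:causes|presents with) ([\w ]+)", re.IGNORECASE)

def extract_triples(text):
    """Yield (subject, 'has_symptom', object) triples from simple sentences."""
    for sentence in text.split("."):
        m = PATTERN.search(sentence.strip())
        if m:
            yield (m.group(1).lower(), "has_symptom", m.group(2).strip().lower())

generated = "Influenza causes fever. Migraine presents with hearing loss."
for triple in extract_triples(generated):
    status = "supported" if triple in TRUSTED else "UNVERIFIED"
    print(status, triple)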
persagen,
@persagen@mastodon.social avatar

@kellogh
3/4

Can large language models build causal graphs?
https://arxiv.org/abs/2303.05279

Establishing Trust in ChatGPT BioMedical Generated Text: An Ontology-Based Knowledge Graph to Validate Disease-Symptom Links
https://arxiv.org/abs/2307.01128

AI and the transformation of social science research
Careful bias management and data fidelity are key
https://www.science.org/doi/10.1126/science.adi1778

persagen,
@persagen@mastodon.social avatar

@kellogh
4/4

Construction of Knowledge Graphs: State and Challenges
https://arxiv.org/abs/2302.11509

Machine Knowledge: Creation and Curation of Comprehensive Knowledge Bases
https://arxiv.org/abs/2009.11564

A Framework for Large Scale Synthetic Graph Dataset Generation
https://arxiv.org/abs/2210.01944

kellogh,
@kellogh@hachyderm.io avatar

@persagen thank you so much. Solid set of links. There goes my evening!
