chikim (@chikim@mastodon.social)

@bryansmart The embedding model is only used during indexing. The quality of the answer depends on the model you are chatting with, because that model reads the retrieved chunks as text and generates the answer. It really comes down to whether LlamaIndex was able to retrieve the relevant chunks. You can increase the number of chunks and the chunk length, but you might end up feeding it chunks unrelated to your question. There's also a threshold you can adjust to filter out chunks below a certain similarity score.
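The retrieval step described above (rank chunks by similarity, take the top few, drop anything under a cutoff) can be sketched in plain Python. This is an illustrative sketch of the idea, not LlamaIndex's actual implementation; the `retrieve` function, its parameter names, and the toy embeddings are all made up for the example.

```python
import math

def cosine_similarity(a, b):
    # standard cosine similarity between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def retrieve(query_vec, chunks, top_k=3, cutoff=0.7):
    # chunks: list of (text, embedding) pairs produced at indexing time
    scored = [(cosine_similarity(query_vec, emb), text) for text, emb in chunks]
    scored.sort(key=lambda s: s[0], reverse=True)
    # keep the top_k highest-scoring chunks, then drop any below the cutoff
    return [(score, text) for score, text in scored[:top_k] if score >= cutoff]

# toy index: three chunks with 2-D embeddings (real embeddings have hundreds of dims)
chunks = [
    ("chunk about topic A", [1.0, 0.0]),
    ("chunk about topic B", [0.0, 1.0]),
    ("chunk near topic A", [0.9, 0.1]),
]
hits = retrieve([1.0, 0.0], chunks, top_k=2, cutoff=0.7)
```

Raising `top_k` or lowering `cutoff` pulls in more context, at the risk of feeding the chat model unrelated chunks, which is exactly the trade-off described above.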
