Daily Notebook

I am a human author. Nothing on this web site is AI-generated unless specifically marked as such.

2025 November 6

Must the Bible be perfect in order to be authoritative?

Doctrinally conservative Christians like myself believe that the Bible is authoritative, i.e., we are not at liberty to defy its teachings. But what does that mean in detail?

Obviously that it is God-given in some special way. But does that mean God dictated every word? And what if the words that have reached us contain evident errors?

Today, at the UGA Christian Faculty Forum, I heard a very good lecture by Mike Licona. The most important of his many good points was the following.

We know that the Bible, as we have it, contains apparent errors. There is a contradiction about who killed Goliath (1 Sam. 17 describes in detail how David did it; 2 Sam. 21:19 says Elhanan did; but 1 Chr. 20:5 says Elhanan killed the brother of Goliath). Luke (19:11-27) apparently jumps track when telling one version of the Parable of the Talents and gets into a different version with only three servants instead of ten; surely Jesus had told both versions, but Luke mixed them together.

Both of these could be errors made in copying the manuscripts. Or they could be errors by the original writers. If the original writers were receiving dictation from God, that is problematic.

But Dr. Licona's point is more subtle. If only the original uncorrupted text is authoritative, then why didn't God preserve it for us? He could have done so and didn't. Apparently God's position is that the Bible is authoritative even with minor textual errors that don't undermine its teaching.

And if that is so, minor incidental errors in the original don't invalidate it either.



The dark matter of pragmatics

Pragmatics is the area of linguistics that studies how, why, and when we say things. It includes discourse structure, context, implicit assumptions, and everything else that goes beyond the pronunciation and literal meaning of what people say.

Why is there such a thing? The main point of The Dark Matter of Pragmatics, by Stephen Levinson, is that we need pragmatics because speech is much slower than thought. We can't utter words fast enough to communicate our complete message. That is why we need a huge repertoire of conventional practices, background assumptions, ways of using context, etc., to enable us to be concise.

By "dark matter" he means the unexplained portions, like dark matter in astronomy. The purpose of the short book is to indicate what areas of pragmatics need further exploration.

Professor Levinson taught one of the first university courses ever offered in pragmatics, at Cambridge in 1977-78, and I was in the classroom. It was an extremely good course, and I have had an abiding interest in the subject ever since.



arXiv: the future of scholarly publishing?

As I've said before, I think scholarly journals are on the verge of dying out. They exist because of the high cost of printing and distribution — scientists could not just send their papers to all their colleagues — but wait a minute, now they can! We put our papers on our web pages all the time.

Enter arXiv.org, presumably pronounced "archive" but not to be confused with archive.org.

arXiv was established as a preprint server for some (not all) sciences. We can upload our papers there, and they will be made accessible, indexed, and preserved, sparing us the trouble of maintaining a web site and moving it from institution to institution.

What's missing is peer review. Papers on arXiv are checked briefly to ensure that they are indeed scientific research (not political diatribes, ads, etc.) and the authors are correctly identified. But there is no vetting of the quality of the research.

The idea is that after putting your paper on arXiv to reach the audience quickly, you'd proceed to submit it to a journal. But many people skip that step. Many of the most important AI papers, including the one that launched chatbots, don't seem to go beyond arXiv.

I wish a peer review system could be added, reviewing papers after they're posted. It would have to involve more than just readers voting on a paper; reviewers would have to be rated by others as to their reliability. But it could be done.
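
To make the idea concrete, here is a rough sketch in Python of the kind of reliability-weighted scoring I mean. It is purely hypothetical, not a feature of arXiv or any existing system, and the names (Reviewer, Review, paper_rating) are made up for illustration: each review's score counts in proportion to the reviewer's reliability, which in turn comes from how other participants have rated that reviewer's past reviews.

    # Hypothetical illustration, not an existing arXiv feature: reviews are
    # weighted by the reviewer's reliability, which is derived from ratings
    # that other participants have given the reviewer's past reviews.
    from dataclasses import dataclass, field
    from statistics import mean

    @dataclass
    class Reviewer:
        name: str
        ratings_received: list[float] = field(default_factory=list)  # 0 to 1

        @property
        def reliability(self) -> float:
            # A reviewer with no track record starts at a neutral 0.5.
            return mean(self.ratings_received) if self.ratings_received else 0.5

    @dataclass
    class Review:
        reviewer: Reviewer
        score: float  # the reviewer's assessment of the paper, 0 to 1

    def paper_rating(reviews: list[Review]) -> float:
        # Reliability-weighted average, rather than a raw vote count.
        total = sum(r.reviewer.reliability for r in reviews)
        if total == 0:
            return 0.0
        return sum(r.score * r.reviewer.reliability for r in reviews) / total

    # A well-rated reviewer's opinion outweighs a newcomer's.
    alice = Reviewer("alice", ratings_received=[0.9, 0.8, 0.95])
    bob = Reviewer("bob")
    print(paper_rating([Review(alice, 0.9), Review(bob, 0.3)]))  # about 0.68

The weighting shown here is only the simplest possible choice; the point is just that the ratings would have structure beyond a raw vote count.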

Anyhow, I just used arXiv to release a corrected version of my classic paper on dependency parsing. A journal wouldn't take a paper that is just a correction of one already published, but arXiv.org is exactly the right place.
