This goes back to what I was saying about genealogical data standards. Some errors are complex fixes that require human supervision; that much is true and undeniable as far as I am concerned.
I have been using both the WikiTree merging tools and the various desktop merging tools, and I find them crude and ineffective; they produce far too much cruft and introduce more errors than they fix. I have also discussed the spectrum of merges, from simple, unambiguous two-profile merges to complex, ambiguous multi-family merges. We can distinguish between these cases and classify them with high precision, and there is no reason the kinds of merges Dale is concerned with should ever be performed by a machine without human supervision.
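To make that concrete, here is a minimal sketch of such a triage step. The Profile fields and the checks are my own assumptions, not the actual WikiTree data model; the point is only that a conservative classifier can automatically route anything with conflicting family links to a human reviewer and reserve automation for the genuinely unambiguous cases.

```python
# Hypothetical sketch: triage proposed merges by complexity so that only
# unambiguous two-profile merges are even considered for automation.
# The Profile fields here are illustrative, not the real WikiTree schema.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Profile:
    name: str
    birth_year: int | None = None
    parent_ids: set[str] = field(default_factory=set)
    spouse_ids: set[str] = field(default_factory=set)

def classify_merge(a: Profile, b: Profile) -> str:
    """Return 'simple' only when nothing about the pair conflicts;
    everything else gets routed to a human reviewer."""
    if a.name.lower() != b.name.lower():
        return "ambiguous: name mismatch"
    if a.birth_year and b.birth_year and a.birth_year != b.birth_year:
        return "ambiguous: conflicting birth years"
    # Two different non-empty parent sets suggest two different families.
    if a.parent_ids and b.parent_ids and a.parent_ids != b.parent_ids:
        return "ambiguous: multi-family conflict"
    if a.spouse_ids and b.spouse_ids and a.spouse_ids != b.spouse_ids:
        return "ambiguous: conflicting spouses"
    return "simple"
```

Anything that comes back "ambiguous" stays exactly where Dale wants it: in human hands.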
Corrections shouldn't be made until we know their full consequences under the proposed rules, and in a professional software engineering environment we would test these things thoroughly on non-live branches of the WikiTree database before ever attempting the fixes on the live database. The gold standard of debugging is formal proof of correctness: you prove that the program cannot produce errors before you run it on live, risky systems. That culture of rigor is famously associated with NASA's Apollo-era flight software, where errors could cost lives.
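In practice, the cheapest version of that discipline is a dry run: apply every proposed rule to a snapshot of the data and inspect the full set of changes it would make before anything touches the live database. A minimal sketch, with an invented record shape and an invented rule:

```python
# Illustrative dry-run harness: apply a proposed correction rule to a
# snapshot of records and report every change it *would* make, without
# writing anything back. The record shape and the rule are assumptions.
import copy

def dry_run(records: list[dict], rule) -> list[tuple[dict, dict]]:
    """Return (before, after) pairs for every record the rule would alter."""
    changes = []
    for record in records:
        before = copy.deepcopy(record)
        after = rule(copy.deepcopy(record))
        if after != before:
            changes.append((before, after))
    return changes

# Example rule: strip a stray annotation from a date field.
snapshot = [{"id": "X-1", "birth_date": "1850 (about)"}]
report = dry_run(snapshot, lambda r: {**r, "birth_date": r["birth_date"].split(" (")[0]})
for before, after in report:
    print(before["birth_date"], "->", after["birth_date"])
```

Only after the reported diff has been reviewed, and ideally replayed on a non-live branch, would anyone commit the change for real.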
Beyond that, there are simple or trivial errors which can be corrected en masse; this would not cost much in computing time or resources and would risk very little, nothing more than is risked with every read and write operation on the active database. For instance, removing non-date-format information from the birth date field would not damage the database, split profiles, or alter relationships between profile entries. It would be relatively simple to identify what kinds of errors exist in these fields by running machine-read-only surveys to develop samples, and then to devise a parse-and-replace algorithm (sketched below). As Magnus said, there is a lot of low-hanging fruit we can deal with long before we have to confront the complex issues of mergers.
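Here is a minimal sketch of that survey-then-clean workflow. The field values are invented examples and the regex is deliberately crude; the important properties are that the survey pass is read-only, and that the cleaner leaves anything it cannot parse untouched rather than guessing.

```python
# Sketch of the read-only survey plus parse-and-replace idea for birth
# date fields. The date grammar below is an assumption, not a standard.
import re
from collections import Counter

# Accept "1850", "Mar 1850", or "12 Mar 1850"; everything else is cruft.
DATE_RE = re.compile(
    r"\b(?:\d{1,2}\s+)?(?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)?\s*\d{4}\b",
    re.IGNORECASE,
)

def survey(values: list[str]) -> Counter:
    """Read-only pass: count the kinds of non-date cruft we would remove."""
    cruft = Counter()
    for v in values:
        m = DATE_RE.search(v)
        leftover = (v[:m.start()] + v[m.end():]).strip() if m else v.strip()
        if leftover:
            cruft[leftover] += 1
    return cruft

def clean(value: str) -> str:
    """Parse-and-replace: keep only the date-formatted portion of the field."""
    m = DATE_RE.search(value)
    return m.group(0).strip() if m else value  # leave unparseable fields alone

print(clean("abt. 1850 (see notes)"))            # -> "1850"
print(survey(["12 Mar 1850 [baptism]", "1850?", "unknown"]))
```

The survey output tells you exactly which cruft patterns exist and how common they are, so the replacement rules can be written against real samples instead of assumptions.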
And again, I will emphasize that this comes back to the long-standing and problematic data standards of the genealogical community; the public portion of WikiTree should exist as a Git-like repository independent of the WikiTree project, so that we can develop and test software solutions to the major genealogical problems.
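Assuming such an export existed as an ordinary Git repository of profile files (which it does not today; this is purely hypothetical), each batch of automated fixes could land on its own branch, where it can be diffed, tested, and reviewed before anything is merged:

```python
# Hypothetical workflow against an assumed Git export of the public data:
# every batch fix goes on a branch, never directly onto the main line.
import subprocess

def propose_fixes(repo: str, branch: str, apply_fixes) -> None:
    run = lambda *args: subprocess.run(["git", "-C", repo, *args], check=True)
    run("checkout", "-b", branch)   # isolate the batch on its own branch
    apply_fixes(repo)               # rewrite the exported profile files
    run("add", "-A")
    run("commit", "-m", f"proposed batch fix: {branch}")
    # A reviewer can now run `git diff main...<branch>` before merging.
```

That gives the community the same review-before-merge safety net that every serious software project relies on.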
"Giving the power to make global changes quickly to someone could result in changes being made that could have just as much potential for error as the situation I described above but on a much larger scale."
This is a question of centralization versus distribution: the database error correction project should be distributed, not centralized as Dale's assertion assumes.
Finally, what Dale fails to acknowledge in this:
"I hope that we keep on the slow pace the database error project is moving at so that we do not make a large number of unresearched changes and unwittingly do even greater damage to the database."
is that there are already a lot of errors and unresearched changes being made to the WikiTree database by ordinary users, some of which are likely causing unpredictable damage to the database or harming the community in subtle ways. To me it seems that Dale is setting up a false dilemma.