I’m going to stick with the meat of your point. To summarize, …
That is not quite how I see it. The linear diagram "brain -> black box -> mind" represents a common mode of thinking about the mind as a by-product of complex brain activity. Modern theories are far more integrative. Conscious perception is not a by-product of the form brain -> black box -> mind; rather, it is an essential, active element in the thought process.
Ascribing predictions, fantasies, and hypotheses to the brain, or calling it a statistical organ, sidesteps the hard problem and collapses it into a physicalist view. Such accounts don't posit a mind-body relationship; they speak only of the body and never acknowledge the mind. I find this frustrating.
That text was probably written by a materialist / physicalist, and this view is consistent within this framework. It is OK that you find this frustrating, and it is also alright if you don’t accept the materialist / physicalist viewpoint. I am not making an argument about materialism being the ultimate truth, or about materialism having all of the answers - especially not answers relating to the hard problem! I am specifically describing how different frameworks held by people who already hold a materialist view can lead to different ways of understanding free will.
Scientists often do sidestep the hard problem, in the sense that they acknowledge it to be "hard" and keep moving without dwelling on it. There are many philosophers (David Chalmers, Daniel Dennett, Stuart R. Hameroff) who do like getting into the nitty-gritty of the hard problem, so there is plenty of material about it, but the general consensus is that answers to the hard problem cannot be found using the materialist's toolkit.
What materialists do have is a mechanism for building consensus: the scientific method. This consensus mechanism has allowed us to understand a lot about the world. I share your frustration in that this class of methods does not seem capable of solving the hard problem.
We may never discover a mechanism for building consensus on the hard problem, and unfortunately this means that answers to many very important questions will remain subjective. As an example, if we eventually implement active inference in a computer, and the computer claims to be conscious, we may have no consensus mechanism to determine whether it "really" is conscious, just as we cannot ascertain today whether the people around us are conscious. In my opinion, yes, it is physically possible to build conscious systems, and at some point this will get tricky, because the question will remain a matter of opinion. It will be an extremely polarizing topic.
Yes, a common example of such a theory is epiphenomenalism. What I am contrasting in my answers is the epiphenomenalist/hard-determinist framework with the physicalist/compatibilist one.
I can try to explain with such a diagram:
stimuli -> nerves -> brain input ports -> brain filtering and distribution
    -> conscious brain processing via causal predictive modelling (CPM)
        -> brain output ports -> nerves -> conscious action
    -> unconscious processing
        -> brain output ports -> nerves -> unconscious action
So, the CPM is a process within the brain. The idea is that the brain is a computer that makes predictions by building cause-and-effect models. What is interesting about the mathematics of causal models is that the underlying engine is the counterfactual. The claim being made here is that mind itself is this counterfactual engine doing its work. The computational space that deals with the counterfactuals or “fantasies” is the essence of the mind.
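To make the "counterfactual engine" idea concrete, here is a minimal sketch in the spirit of structural causal models. The model (rain causing wet ground, with a second background cause) is a toy invented for illustration, not something from the discussion above; the point is only that a counterfactual query means holding the background conditions fixed while intervening on one cause and re-running the model.

```python
def wet_ground(rain: bool, leaky_pipe: bool) -> bool:
    # Structural equation of the toy model:
    # the ground is wet if it rained OR the pipe is leaking.
    return rain or leaky_pipe

# Factual world: it rained, the pipe is fine, the ground is wet.
background = {"leaky_pipe": False}
factual = wet_ground(rain=True, **background)

# Counterfactual query: "Would the ground be wet had it NOT rained?"
# Keep the background fixed, intervene only on the cause of interest.
counterfactual = wet_ground(rain=False, **background)

print(factual)        # True
print(counterfactual) # False
```

On this framing, the brain's generative model would be a vastly richer version of `wet_ground`, and the "fantasies" are runs of the model under interventions that never actually happened.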
This is not in any way a solution to the hard problem of consciousness. Rather, it is one of many frameworks compatible with physicalism, and it is the one I personally subscribe to. In this framework, it is a postulate that conscious experience corresponds to the brain’s counterfactual simulations within a generative model used for predicting and guiding action. This postulate does not prove or mechanistically explain consciousness. No physical theory currently does.