>this is not really a good way to think about experimental design. this book goes into model development regarding controlling variables and randomizing inputs, and developing counterfactuals.

Expand on this. Why not?

Fundamentally, to test under experimental conditions is to test a part of the system in isolation. This requires knowing what to isolate, knowing what it does to the overall system, and being confident that all external factors have been accounted for.

Similarly, a model conveys only what we are aware of, and its level of complexity is limited by what we think is sensible given the available computational capability and the requirements for the model.

Looping back to my usual point, this makes such experimentation highly unreliable when applied to the economy, as it is an immensely complex system.
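The "all external factors accounted for" point can be illustrated with a small simulation (hypothetical numbers, minimal Python sketch): a hidden factor the experimenter never measured biases the naive comparison, while randomized assignment averages it out without anyone having to know it exists.

```python
import random

random.seed(0)

def outcome(treated, hidden):
    # True treatment effect is +1.0; the unmeasured factor adds +2.0.
    return 1.0 * treated + 2.0 * hidden + random.gauss(0, 0.1)

# Confounded setting: the hidden factor also drives who gets treated,
# and the experimenter does not know to control for it.
obs = []
for _ in range(10_000):
    hidden = random.random() < 0.5
    treated = hidden  # assignment entangled with the hidden factor
    obs.append((treated, outcome(treated, hidden)))

# Randomized setting: treatment assigned by coin flip, so the hidden
# factor balances across groups even though it was never measured.
rct = []
for _ in range(10_000):
    hidden = random.random() < 0.5
    treated = random.random() < 0.5
    rct.append((treated, outcome(treated, hidden)))

def effect(rows):
    # Difference in mean outcome between treated and untreated groups.
    t = [y for d, y in rows if d]
    c = [y for d, y in rows if not d]
    return sum(t) / len(t) - sum(c) / len(c)

print(effect(obs))  # badly biased away from the true +1.0
print(effect(rct))  # close to the true +1.0
```

Randomization rescues the estimate only for factors that vary independently per trial; it does not help when, as argued above, you cannot even isolate the part of the system you want to test.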

>This course I took around 2013 also goes into what influence different graph nodes implicate on one another. So, no, scale wise, there's not a "hard limit". Not one present in this book's examples anyways.

What do you mean by "hard limit"? To what?

Logically, that statement does not hold together. It reads as: you took a course, therefore there is no hard limit, at least according to the book.

That doesn't follow, and I can't respond to it as stated. Please expand.

>Nope, just that "emergent order" has no boundary to tell which is fictitious and what isn't.

We know a priori that, say, the solar system emerged and wasn't imposed by the deliberate action of some conscious entity, specifically not through human action. I get what you mean: we can't prove it empirically. I argue we don't have to.

>Right, wrong, and "end up" are all functions of input by a biased humanoid. Particularly involving "errors of input", data selection, and adapting older models (knowledge) to newer.

Look, if my goal is to train an AI to identify trees, then a tree would be right and a non-tree would be wrong. The fact that we call a stick with dangling bits a "tree" has little significance in this context.
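To make the point concrete, here is a minimal sketch (invented labels, Python): "right" is nothing deeper than agreement with the labels a human supplied, and it works regardless of whether "tree" is a philosophically clean category.

```python
# Toy "tree detector" evaluation: correctness is defined entirely by
# human-supplied labels. All data below is invented for illustration.
labels = {
    "oak": "tree",
    "birch": "tree",
    "lamppost": "not tree",
    "stick with dangling bits": "tree",  # we chose to call this a tree
}

def evaluate(predict):
    """Accuracy is simply agreement with the labels, nothing more."""
    hits = sum(predict(name) == label for name, label in labels.items())
    return hits / len(labels)

# A classifier is "right" exactly to the extent it matches our labelling choices.
print(evaluate(lambda name: "tree"))  # 0.75 under these labels
```

Change the label on the stick and the same classifier's "rightness" changes with it, which is precisely why the naming question has no force here: the goal defines the standard.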

>I'm currently planning and executing a FPGA Evolvable Hardware project, where "we don't know what our ignorance looks like" on the circuitry level. Doesn't default to "emergent order"; no more than a "God of the Gaps" defaults to "Must be God".

Within the framework that I have outlined, emergent order is an order that occurs naturally, without deliberate outside interference by means of human action.

It's been a while since I did anything with FPGAs/ASICs, but you essentially run a genetic algorithm of sorts to come up with the most efficient design given initial parameters and restrictions.
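The loop I mean looks roughly like this (a minimal sketch, not an actual evolvable-hardware flow; the bitstring and fitness function are stand-ins for a real circuit configuration and its measured behaviour):

```python
import random

random.seed(1)

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # stand-in for the "most efficient design"

def fitness(design):
    # Score a candidate against the design goal. A real EHW flow would
    # load the bitstring onto the FPGA and measure circuit behaviour.
    return sum(a == b for a, b in zip(design, TARGET))

def mutate(design, rate=0.1):
    # Flip each bit with small probability.
    return [1 - b if random.random() < rate else b for b in design]

def evolve(pop_size=20, generations=100):
    # Random initial population within the allowed parameter space.
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]                    # selection
        children = [mutate(random.choice(survivors)) for _ in survivors]
        pop = survivors + children                          # next generation
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

The "initial parameters and restrictions" show up as the population encoding and whatever constraints the fitness function enforces; the algorithm itself is indifferent to what the bits physically mean.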

Either way, you clarified that you don't deny the existence of complexity as a whole, so let's just leave this point be.

>Quite the opposite. I'm an atheist, so I'm still asserting that any aspects of "order" are imposed. We don't have any "somewhat ordered", "non-ordered", "imposed ordered", "emergent ordered", "99.999% deterministic ordered", "43.2% non-deterministic ordered" (....etc...) Universes to compare against. Thus, no demarcation of repeatability and falsifiability. We're stuck with what we got, and any claims of "order" are likely made by someone with something to prove. Teleology ain't a science.

>The fact that you can't articulate this doesn't reveal insidiousness, rather, it reveals the knowledge you've learned is biased. It's like bad set theory by reusing "error", "loss", "noise", and other "wrong" variables in inappropriate contexts.

This looks to me like an argument over definitions with an ultra-empiricist twist.

Arguing definitions across frameworks is pointless. Definitions don't prove anything in their own right; they are merely tools to assist in conveying a message.

Let's take a step back. Before we continue this discussion, define "emergent order" the way you see it within your framework; then let's compare it to the way I define it, to see whether we are discussing the same thing to begin with.

> because the scale of the system is such that we don't know what to control for

this is not really a good way to think about experimental design. this book goes into model development regarding controlling variables and randomizing inputs, and developing counterfactuals.

This course I took around 2013 also goes into what influence different graph nodes implicate on one another. So, no, scale wise, there's not a "hard limit". Not one present in this book's examples anyways.

>you have asserted that complexity is a work of fiction

Nope, just that "emergent order" has no boundary to tell which is fictitious and what isn't.

>Then letting them run a huge amount of iterations of a genetic algorithm until they can end up with the right answer vast majority of the time.

Right, wrong, and "end up" are all functions of input by a biased humanoid. Particularly involving "errors of input", data selection, and adapting older models (knowledge) to newer.

I'm currently planning and executing a FPGA Evolvable Hardware project, where "we don't know what our ignorance looks like" on the circuitry level. Doesn't default to "emergent order"; no more than a "God of the Gaps" defaults to "Must be God".

>Also, like, our universe is an example of emergent order as opposed to imposed order, unless you're quite deeply religious in which case carry on, I won't get in the way of that.

Quite the opposite. I'm an atheist, so I'm still asserting that any aspects of "order" are imposed. We don't have any "somewhat ordered", "non-ordered", "imposed ordered", "emergent ordered", "99.999% deterministic ordered", "43.2% non-deterministic ordered" (....etc...) Universes to compare against. Thus, no demarcation of repeatability and falsifiability. We're stuck with what we got, and any claims of "order" are likely made by someone with something to prove. Teleology ain't a science.

>has on the system in question and the wider system of systems system.

The fact that you can't articulate this doesn't reveal insidiousness, rather, it reveals the knowledge you've learned is biased. It's like bad set theory by reusing "error", "loss", "noise", and other "wrong" variables in inappropriate contexts.
