
Opinion

3,000 acres of farmland, yet nowhere to build a pool.


There are 3,000 acres but nowhere to put a pool. During budget deliberations on November 20, 2018, Councillor Buchanan asked Mr. Curtis, the city manager, whether there was anywhere north of the river to build a pool. Mr. Curtis said no. Four years ago I was appointed to the Community Action Committee to examine the need for, and make recommendations on, an Aquatic Centre in Red Deer. A new mayor and a new council had just been elected, and time was of the essence.

In 2014 the city wanted to renovate the downtown pool, and that was obvious from the paperwork and the answers we received. We were told repeatedly that the Michener pool was owned by the province. Timberlands had no room available, as it was already filled with three planned high schools and sports fields. Hazlett Lake was too many years down the road, and the Canada Games were coming in 2019. It had to be built downtown.
That was four years ago, and we are no closer to having a 50 m pool than we were then. But now the city wants it built in Timberlands, near the high schools, much as the Collicutt Centre sits near high schools. And like the Collicutt, it will be east of 30 Avenue, between 29 Street and 69 Street; 30 Avenue will become like Calgary's Deerfoot Trail with all that traffic.
In about 300 acres we will have three high schools, sports fields, pickleball courts and an aquatic centre, yet in 3,000 acres north of Highway 11A there is no room to park a pool.

Councillor Wong asked about what is, to my mind, the perfect spot: just north of 11A near Hazlett Lake, visible from the QE2. Mr. Curtis said the road allowance there would be too narrow, and the city would have to buy private land to accommodate the pool and services, adding uncertainty to the costs. Yet they do not mention that cost uncertainty when they talk about buying 2.5 acres of private land in Timberlands. Is it a done deal? Have they already made a conditional deal subject to council approval? I do not know.

Mr. Curtis reminded council that the city has had two opportunities to build a 50 m pool in the last 20 years, and both were squandered: once when building the Collicutt, and again when renovating the downtown pool. Will they do it again?
The city just built the Servus Arena, and the college just opened a new ice facility, yet the city wants to build a new rink to replace the Kinex Arena when it fails. The Mayor says Red Deer services what it has. It looks like we will get another new arena, slated for the Dawe, though many are expressing doubts about the feasibility of that venture and feel it will ultimately be built by the Collicutt Centre instead.

In 2001 the city opened its fourth and last pool, when the population was about 70,000 people. The city services what it has. If you read the financial statements you will notice two things: our population is now just shy of 100,000, and the Michener pool is expected to close at some point in the next capital budget cycle, leaving us with three pools.
If we renovate the downtown pool we will be down to two pools for a while, then back to three pools for many years to come. We only build or renovate pools every 20 to 30 years, and by then the Dawe and Collicutt pools will be 70 and 50 years old.
If we built the Aquatic Centre north of 11A, where development has yet to start, it could serve as a catalyst to spur development. Shoehorning it in with three new high schools, new sports fields and pickleball courts will not get the same bang for the buck.
Why not combine the new ice rink and the 50 m pool into a Collicutt-style complex in an empty field on the northwest corner of Red Deer, as we did with the Collicutt Centre on the southeast corner?

It could spur development and be well used: the Collicutt already serves 60% of recreation facility users in Red Deer, far surpassing all other pools combined.
This will not happen, because the city is too focused on process, waiting for good fortune to come calling. I watched the debate, noticed that council sits in a semi-circle facing inward, and marvelled at how their attention is focused in and not out.
I also noticed that the Mayor and City Manager sit much higher than council, reigning supreme over the lowly councillors. That impression is reinforced by little actions like the Mayor telling a councillor his question had taken six minutes, though there are no time limits to follow. Shouldn't elected councillors, in their duties as guardians of the public purse, be allowed the same latitude and time as the equally elected mayor?

We have nine strong and very intelligent elected members looking after our well-being; should they not be allowed to bring their strengths to the table? We have people trained in law, economics, planning, business, agriculture, politics, law enforcement, education and history, to name but a few. Showcase them; don't muzzle them.

If they have concerns, don't dismiss them or limit their time; get to the bottom of it. That is why we elected them. The city has survived the booms and busts of the Alberta economy for many years, but we have not recovered, and for the last few years we have not enjoyed the rebounding economy like the rest of the province. Why? Our economy is stalling while everyone else enjoys growth, and our declining population faces more doom and gloom while those around us are seeing positive growth. Perhaps it is time to stop waiting for potential development to land in our lap and instead make Red Deer attractive to businesses, residents, tourists and athletes, to name but a few.

As one gentleman wrote, we are hosting the Canada Games yet cannot hold some of the events, showcasing to the country what we don't have. Poor publicity. If only we had looked outward and acted those other times. Just saying.


Artificial Intelligence

The Emptiness Inside: Why Large Language Models Can’t Think – and Never Will


By Gleb Lisikh

Early attempts at artificial intelligence (AI) were ridiculed for giving answers that were confident, wrong and often surreal – the intellectual equivalent of asking a drunken parrot to explain Kant. But modern AIs based on large language models (LLMs) are so polished, articulate and eerily competent at generating answers that many people assume they can know and, even better, can independently reason their way to knowing.

This confidence is misplaced. LLMs like ChatGPT or Grok don’t think. They are supercharged autocomplete engines. You type a prompt; they predict the next word, then the next, based only on patterns in the trillions of words they were trained on. No rules, no logic – just statistical guessing dressed up in conversation. As a result, LLMs have no idea whether a sentence is true or false or even sane; they only “know” whether it sounds like sentences they’ve seen before. That’s why they often confidently make things up: court cases, historical events, or physics explanations that are pure fiction. The AI world calls such outputs “hallucinations”.
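To make the "autocomplete" point concrete, here is a toy sketch in Python of next-word prediction from observed word frequencies alone. The three-sentence corpus is invented for illustration, and real LLMs use vastly more sophisticated statistics over trillions of words, but the underlying principle is the same: predict what usually comes next, with no notion of truth.

```python
from collections import Counter, defaultdict

# A tiny invented "training corpus". Real LLMs ingest trillions of words,
# but the idea is identical: count which word tends to follow which.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Build bigram counts: for each word, how often each next word follows it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word.
    No grammar, no meaning, no truth-checking -- just frequency."""
    return follows[word].most_common(1)[0][0]

# Generate text by repeatedly predicting the next word.
word, output = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    output.append(word)

print(" ".join(output))  # → "the cat sat on the cat"
```

Note what happens: the output is locally fluent ("the cat sat on...") yet drifts into nonsense ("...the cat"), because the model tracks only what words co-occur, never what any sentence means – a miniature hallucination.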

Because the LLM’s speech is fluent, users instinctively project understanding onto the model, triggered by the same human “trust circuits” we use for spotting intelligence. This is fallacious reasoning, a bit like hearing someone speak perfect French and assuming they must also be an excellent judge of wine, fashion and philosophy. We confuse style for substance, and we anthropomorphize the speaker. That in turn tempts us into two mythical narratives.

Myth 1: “If we just scale up the models and give them more ‘juice’, then true reasoning will eventually emerge.”

Bigger LLMs do get smoother and more impressive. But their core trick – word prediction – never changes. It’s still mimicry, not understanding. The myth assumes intelligence will magically emerge from quantity, as though making tires bigger and spinning them faster will eventually make a car fly. But the obstacle is architectural, not scalar: you can make the mimicry more convincing (make the car jump off a ramp), but you cannot convert a pattern predictor into a truth-seeker by scaling it up. You merely get better camouflage and, studies have shown, even less fidelity to fact.

Myth 2: “Who cares how AI does it? If it yields truth, that’s all that matters. The ultimate arbiter of truth is reality – so cope!”

This one is especially dangerous, as it stomps on epistemology wearing concrete boots. It effectively claims that the seeming reliability of an LLM’s mundane knowledge should extend to trusting the opaque methods through which that knowledge is obtained. But truth has rules. A conclusion becomes epistemically trustworthy only when it is reached through either: 1) deductive reasoning (conclusions that must be true if the premises are true); or 2) empirical verification (observations of the real world that confirm or disconfirm claims).

LLMs do neither of these. They cannot deduce, because their architecture doesn’t implement logical inference: they don’t manipulate premises to reach conclusions, and they are clueless about causality. Nor can they empirically verify anything, because they have no access to reality: they can’t check the weather or observe social interactions.

Attempting to overcome these structural obstacles, AI developers bolt external tools like calculators, databases and retrieval systems onto an LLM system. Such ostensible truth-seeking mechanisms improve outputs but do not fix the underlying architecture.

The “flying car” salesmen, peddling accomplishments like IQ test scores, claim that today’s LLMs show superhuman intelligence. In reality, LLM IQ tests violate every rule for conducting intelligence testing, making them a competition in human prompt-engineering skill rather than a valid assessment of machine smartness.

Efforts to make LLMs “truth-seeking” by brainwashing them to align with their trainers’ preferences through mechanisms like reinforcement learning from human feedback (RLHF) miss the point. Those attempts to fix bias only make waves in a structure that cannot support genuine reasoning. This regularly reveals itself through flops like xAI Grok’s MechaHitler bravado or Google Gemini’s representing America’s Founding Fathers as a lineup of “racialized” gentlemen.

Other approaches exist, though, that strive to create an AI architecture enabling authentic thinking:

- Symbolic AI: uses explicit logical rules; strong on defined problems, weak on ambiguity;
- Causal AI: learns cause-and-effect relationships and can answer “what if” questions;
- Neuro-symbolic AI: combines neural prediction with logical reasoning; and
- Agentic AI: acts with a goal in mind, receives feedback and improves through trial and error.
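As a contrast with statistical word prediction, here is a minimal Python sketch of the kind of explicit inference a symbolic system performs. The facts and rules are invented for illustration (the classic Socrates syllogism); the point is that each derived conclusion follows necessarily from the premises rather than from word frequencies:

```python
# A toy symbolic reasoner: forward chaining over if-then rules.
# Facts and rules are invented for illustration.
facts = {"socrates_is_a_man"}
rules = [
    ({"socrates_is_a_man"}, "socrates_is_mortal"),   # all men are mortal
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

def forward_chain(facts, rules):
    """Repeatedly apply modus ponens until no new facts appear.
    Every derived fact is guaranteed true if the premises are true."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire a rule only when ALL its premises are already established.
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(sorted(forward_chain(facts, rules)))
```

Unlike the autocomplete engine, this reasoner is transparent and auditable: you can point to exactly which rule produced each conclusion, which is the "strong on defined problems" property of symbolic AI – and its rigidity on anything not covered by a rule is the "weak on ambiguity" flip side.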

Unfortunately, current progress in AI relies almost entirely on scaling LLMs, while the alternative approaches receive far less funding and attention – the good old “follow the money” principle. Meanwhile, the loudest “AI” in the room is just a very expensive parrot.

LLMs, nevertheless, are astonishing achievements of engineering and wonderful tools useful for many tasks. I will have far more on their uses in my next column. The crucial thing for users to remember, though, is that all LLMs are and will always remain linguistic pattern engines, not epistemic agents.

The hype that LLMs are on the brink of “true intelligence” mistakes fluency for thought. Real thinking requires understanding the physical world, persistent memory, reasoning and planning – capacities that LLMs handle only primitively or not at all, a design fact that is non-controversial among AI insiders. Treat LLMs as useful thought-provoking tools, never as trustworthy sources. And stop waiting for the parrot to start doing philosophy. It never will.

The original, full-length version of this article was recently published as Part I of a two-part series in C2C Journal. Part II can be read here.

Gleb Lisikh is a researcher and IT management professional, and a father of three children, who lives in Vaughan, Ontario and grew up in various parts of the Soviet Union.


armed forces

Global Military Industrial Complex Has Never Had It So Good, New Report Finds


From the Daily Caller News Foundation

By Wallace White

The global war business scored record revenues in 2024 amid multiple protracted proxy conflicts across the world, according to a new industry analysis released on Monday.

The top 100 arms manufacturers in the world raked in $679 billion in revenue in 2024, up 5.9% from the year prior, according to a new Stockholm International Peace Research Institute (SIPRI) study. The figure marks the highest ever revenue for manufacturers recorded by SIPRI as the group credits major conflicts for supplying the large appetite for arms around the world.

“The rise in the total arms revenues of the Top 100 in 2024 was mostly due to overall increases in the arms revenues of companies based in Europe and the United States,” SIPRI said in their report. “There were year-on-year increases in all the geographical areas covered by the ranking apart from Asia and Oceania, which saw a slight decrease, largely as a result of a notable drop in the total arms revenues of Chinese companies.”

Notably, Chinese arms manufacturers saw a large drop in reported revenues, declining 10% from 2023 to 2024, according to SIPRI. Just off China’s shores, Japan’s arms industry saw the largest single year-over-year increase in revenue of all regions measured, jumping 40% from 2023 to 2024.

American companies dominate the top of the list, which ranks individual companies by arms revenue, with Lockheed Martin taking the top spot at $64.65 billion in 2024, according to the report. Raytheon Technologies, Northrop Grumman and BAE Systems follow close behind.

The Czechoslovak Group recorded the single largest year-on-year revenue jump from 2023 to 2024, increasing its haul by 193%, according to SIPRI. The increase was largely driven by the group’s crucial role in supplying arms and ammunition to Ukraine.

The Pentagon contracted one of the group’s subsidiaries in August to build a new ammo plant in the U.S. to replenish artillery shell stockpiles drained by U.S. aid to Ukraine.

“In 2024 the growing demand for military equipment around the world, primarily linked to rising geopolitical tensions, accelerated the increase in total Top 100 arms revenues seen in 2023,” the report reads. “More than three quarters of companies in the Top 100 (77 companies) increased their arms revenues in 2024, with 42 reporting at least double-digit percentage growth.”
