## What happens when a reanimated creature returns from exile?


Let’s say I bring a creature (the canonical Runeclaw Bear) into play from an opponent’s graveyard using one of the “enchant creature in a graveyard” cards (Necromancy, Animate Dead, or Dance of the Dead).

Now the Runeclaw Bear is exiled (by a Bishop of Binding, say). Since it leaves the battlefield, Necromancy goes to my graveyard, right?

Later, the Bishop of Binding dies, so the Bear should return from exile. Does it return under my control, now independent of Necromancy?

And what about “when Necromancy leaves the battlefield, that creature’s controller sacrifices it”? Does that simply fizzle?


magic-the-gathering

– xebtl

In the examples you provide, yes, exiling and returning the creature will make it a “regular” creature again that no longer requires the enchantment to live. However, it will return to the battlefield under its owner’s control, not yours.

You are correct in most of your assumptions. When a creature enchanted with Necromancy etc. leaves the battlefield, Necromancy no longer enchants a legal target and is put into its owner’s graveyard as a state-based action.

704.5m If an Aura is attached to an illegal object or player, or is not attached to an object or player, that Aura is put into its owner’s graveyard.

Necromancy’s leave-the-battlefield trigger then tries to make you sacrifice the creature it enchanted, but since that creature object no longer exists (it became a new object on the zone change), the sacrifice simply doesn’t happen. When the previously enchanted, now-exiled creature returns to the battlefield, it becomes a new object yet again, with no relation to its former existence. At that point, Necromancy is long gone and has no effect on the Bear.

400.7. An object that moves from one zone to another becomes a new object with no memory of, or relation to, its previous existence. [...]

The bear returns under its owner’s control, i.e. your opponent’s:

610.3. Some one-shot effects cause an object to change zones “until” a specified event occurs. A second one-shot effect is created immediately after the specified event. This second one-shot effect returns the object to its previous zone.

610.3b An object returned to the battlefield this way returns under its owner’s control unless otherwise specified.
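One way to picture the “new object” rule is with object identity in code. The following is a rough, unofficial sketch (nothing here comes from a real rules engine; `GameObject` and `change_zone` are invented names) showing why Necromancy’s sacrifice trigger finds nothing to sacrifice:

```python
class GameObject:
    """Minimal stand-in for a Magic object; Python identity models CR 400.7."""
    def __init__(self, name):
        self.name = name

def change_zone(obj):
    # CR 400.7: an object that changes zones becomes a brand-new object
    # with no memory of, or relation to, its previous existence.
    return GameObject(obj.name)

bear = GameObject("Runeclaw Bear")    # reanimated; this is what Necromancy enchants
tracked = bear                        # the object Necromancy's trigger refers to

exiled = change_zone(bear)            # Bishop of Binding exiles the Bear
returned = change_zone(exiled)        # Bishop leaves; the Bear returns

battlefield = [returned]
# Necromancy's trigger looks for the object it enchanted. That object no
# longer exists, so the sacrifice simply doesn't happen:
print(tracked in battlefield)         # False -- the sacrifice fizzles
print(returned.name == tracked.name)  # True -- same card, three distinct objects
```

The same identity logic explains why the returned Bear is independent of Necromancy: the Aura’s effect applied to an object that no longer exists.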

Note that a few reanimation effects prevent the sequence of events described above. For example, the reanimation effect of Isareth the Awakener exiles the Runeclaw Bear when it would leave the battlefield. Even though Bishop of Binding would also exile the creature, Isareth’s exile effect replaces the Bishop’s, which means the Bishop’s never happens, and thus the Bishop leaving the battlefield would not cause the Runeclaw Bear to return.

614.6. If an event is replaced, it never happens. A modified event occurs instead, which may in turn trigger abilities. [...]

– Hackworth (edited Nov 26 at 11:15)
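The replacement logic can also be sketched in code. This is an unofficial illustration only, with invented names (`bishop_exile`, `isareth_replacement`, `resolve_event`): because the original event never happens (CR 614.6), the delayed return effect that the Bishop’s exile would have set up (CR 610.3) is never created.

```python
def bishop_exile(creature, pending_returns):
    # Bishop of Binding's exile is a one-shot effect that also schedules a
    # return "until Bishop of Binding leaves the battlefield" (CR 610.3).
    pending_returns.append(creature)
    return ("exiled", creature, "until Bishop leaves")

def isareth_replacement(creature, pending_returns):
    # Isareth's effect: "exile it instead" -- a plain exile that schedules
    # no return at all.
    return ("exiled", creature, "forever")

def resolve_event(creature, original, replacements, pending_returns):
    for replacement in replacements:
        # CR 614.6: the original event is replaced and never happens.
        return replacement(creature, pending_returns)
    return original(creature, pending_returns)

pending = []
result = resolve_event("Runeclaw Bear", bishop_exile,
                       [isareth_replacement], pending)
print(result)   # ('exiled', 'Runeclaw Bear', 'forever')
print(pending)  # [] -- no return effect was ever set up
```

The key point the sketch makes: the replacement does not merely change *where* the card goes; it substitutes a different event entirely, so none of the original event’s side effects exist.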

• It’s worth explicitly stating, either way, who now controls the bear.
– Pureferret
Nov 23 at 16:01

• Interesting, you reversed the conclusion in the edit :-). Nice answer, well referenced.
– xebtl
Nov 27 at 11:04

I agree with Hackworth’s answer, but I would like to add a note on the exile/replacement interaction.

When applying rule 614.6, we must differentiate between events and abilities. The event being replaced may be part of an ability, but that doesn’t replace the ability as a whole, nor does the replacement become a separate effect[1].

For convenience’s sake, you may replace the original event with the replacement event in the wording of the ability. This question illustrates the interaction between card tracking and replacement effects far better than I could.

That is to say, applying the replacement effect of Isareth the Awakener to the ability of Bishop of Binding would result in a final ability along the lines of:

When Bishop of Binding enters the battlefield, exile[2] target creature an opponent controls until Bishop of Binding leaves the battlefield.

That ability from Bishop of Binding will track the reanimated card in exile and return it when the specified time ends.

[1] The interaction between Kalitas, Traitor of Ghet and Anointed Procession is a good illustration: when both are on the battlefield, Kalitas does not create two tokens, as the replacement is not an effect.

[2] The original exile has been replaced by Isareth’s exile.

– PbWO4 (new contributor)

• Kalitas and Anointed Procession absolutely do create 2 tokens for every replaced creature death, because multiple replacement effects can chain (616.2). Also, the Bishop’s exile effect is a one-shot zone-change effect with an event-triggered expiration (610.3). The return part is set up by that exile effect, but the exile effect is replaced by Isareth. Therefore, the return effect is not set up either.
– Hackworth
Nov 26 at 11:10

• Indeed, the rules have changed for Kalitas since Ixalan (link for the original ruling), my bad. However, I will have to point to the ruling on Roon of the Hidden Realm which has a very similar ability, but can track to exile or command zone depending on the replacement effect.
– PbWO4
Nov 26 at 11:22

• It’s not about being able to track the card or not; it’s about the fact that the Bishop’s exile effect never happens if it gets replaced (614.6). Just because the replacement effect superficially does the same thing (exiling the creature) doesn’t mean the Bishop’s ability is still in effect and can create its return effect.
– Hackworth
Nov 26 at 11:31




## Battle of Philomelion

Part of the Byzantine–Seljuq wars

| | |
|---|---|
| Date | autumn 1116 |
| Location | Philomelion, Asia Minor |
| Result | Byzantine victory[1] |
| Belligerents | Byzantine Empire · Sultanate of Rum |
| Commanders | Alexios I Komnenos · Sultan Malik Shah |
| Strength | Unknown · Unknown |

The Battle of Philomelion (Latinised as Philomelium; modern Akşehir) of 1116[2] consisted of a series of clashes over a number of days between a Byzantine expeditionary army under Emperor Alexios I Komnenos and the forces of the Sultanate of Rûm under Sultan Malik Shah; it occurred in the course of the Byzantine–Seljuq wars. The Seljuq forces attacked the Byzantine army a number of times to no effect; having suffered losses in the course of these attacks, Malik Shah sued for peace.


## Background

Following the success of the First Crusade, the Byzantine armed forces, led by John Doukas the megas doux, reconquered the Aegean coastline and much of the interior of western Anatolia. However, after the failure of the Crusade of 1101, the Seljuq and Danishmend Turks resumed their offensive operations against the Byzantines. Following their defeats, the Seljuqs under Malik Shah had recovered control of central Anatolia, re-consolidating a viable state around the city of Iconium. Emperor Alexios I Komnenos, aged and suffering from an illness which proved to be terminal, was unable to prevent Turkish raids into the recovered areas of Byzantine Anatolia, though an attempt to take Nicaea in 1113 was thwarted by the Byzantines. In 1116 Alexios was able to take the field in person and was engaged in defensive operations in northwest Anatolia. Basing his army at Lopadion, and later at Nicomedia, he succeeded in defeating raiding Turks in a minor battle at Poemanenon.[3] After receiving reinforcements, Alexios decided to move on to the offensive.[4]

Emperor Alexios I

In the campaign of Philomelion Alexios led a sizeable Byzantine army deep into the Anatolian interior. Anna Komnene, the primary source for the campaign, implies that the Seljuq capital of Iconium was the goal of the expedition, but evidently Alexios abandoned this plan and contented himself with staging a conspicuous show of force and evacuating the native Christian population from the Turkish-dominated areas his army passed through.[5] The Byzantines were to employ a new battle formation of Alexios’ devising, the parataxis. Anna Komnene’s description of this formation is so imprecise as to be useless.[6] However, from her account of the army in action the nature of the parataxis is revealed: it was a defensive formation, a hollow square with the baggage in the centre, infantry on the outside and cavalry in between, from whence they could mount attacks.[7] It was an ideal formation for countering the fluid Turkish battle tactics, which relied on swarm attacks by horse-archers. A similar formation was later employed by Richard I of England at the Battle of Arsuf.

The Byzantines moved through Santabaris, sending detachments via Polybotos and Kedros, and, after dispersing Turkish resistance, took Philomelion by assault. Parties of scouts were then sent out to round up the local Christian population for evacuation to areas under firm Byzantine control.[8][9]

## Battle

Alexios became aware that a substantial Seljuq army was approaching from the north and began his retreat to his own territory. His army resumed its defensive formation with the civilians accompanying the baggage in the centre. The Turks, under an officer called Manalugh, were initially baffled by the Byzantine formation and did not attack with any vigour. The following day, however, Sultan Malik Shah arrived and the Byzantines were attacked in earnest.[10] The Turks mounted a simultaneous attack on the van and rear of the Byzantine army. The Byzantine cavalry made two counterattacks; the first seems to have been unsuccessful.[11] A further counterattack, led by Nikephoros Bryennios the Younger (Anna Komnene’s husband and Alexios’ son-in-law), the commander of the Byzantine right wing, was more fortunate: it broke the part of the Turkish force led in person by the sultan, which then turned to flight. Malik Shah narrowly escaped capture.[12] The Seljuqs then made a night attack, but the Byzantine dispositions again frustrated them. The following day Malik Shah attacked again, his troops completely surrounding the Byzantine army, but the Turks were once more repulsed with loss, having achieved nothing. The next day Malik Shah sent envoys to Alexios with proposals for peace.[13][14]

## Aftermath

Alexios and Malik Shah met, Alexios throwing his own costly cloak around the sultan’s shoulders. They concluded a peace in which Malik Shah undertook to stop Turkish raiding and admitted some measure of largely theoretical dependence on the Byzantine emperor. Anna Komnene records that the treaty also bound Malik Shah to evacuate Anatolia, but this is unlikely in the extreme and must represent hyperbole on her part.[15] The campaign was remarkable for the high level of discipline shown by the Byzantine army, and Alexios had demonstrated that he could march his army with impunity through Turkish-dominated territory.[16] The reverse suffered by Malik Shah at Philomelion, and the consequent loss of prestige, probably contributed to his downfall: he was soon deposed, blinded and eventually murdered by his brother Mas’ud.[17]
Alexios’ death in 1118 meant that the ambition of reconquering all of Asia Minor was left to his 31-year-old son, John II Komnenos.

## Notes

1. ^ Norwich, John Julius (1997). A Short History of Byzantium. New York: Vintage Books. p. 264.
2. ^ Venning, Timothy; Frankopan, Peter (2015). A Chronology of the Crusades. Routledge. p. 77. ISBN 9781317496427.
3. ^ Birkenmeier, p. 78
4. ^ Birkenmeier, p. 79
5. ^ Komnene, ed. Sewter (1969), p. 481
6. ^ Komnene, ed. Sewter (1969), pp. 479–480
7. ^ Birkenmeier, p. 79
8. ^ Komnene, ed. Sewter (1969), p. 483
9. ^ Birkenmeier, p. 79
10. ^ Komnene, ed. Sewter (1969), pp. 484–485
11. ^ Komnene, ed. Sewter (1969), p. 485
12. ^ Komnene, ed. Sewter (1969), p. 486
13. ^ Komnene, ed. Sewter (1969), pp. 486–487
14. ^ Birkenmeier, p. 79, footnote
15. ^ Komnene, ed. Sewter (1969), pp. 487–488
16. ^ Birkenmeier, pp. 79–80
17. ^ Komnene, ed. Sewter (1969), pp. 488–491

## References

• Birkenmeier, John W. (2002). The Development of the Komnenian Army: 1081–1180. Brill. ISBN 90-04-11710-5.
• Komnene (Comnena), Anna (1969). The Alexiad. Translated by Edgar Robert Ashton Sewter. Penguin Classics. ISBN 0-14-044215-4.
• Norwich, John Julius (1997). A Short History of Byzantium. New York: Vintage Books.
• Beihammer, Alexander Daniel (2017). Byzantium and the Emergence of Muslim-Turkish Anatolia, ca. 1040–1130. Taylor & Francis. ISBN 978-1-351-98386-0.

## See also

• Komnenian Byzantine army
• Komnenian restoration

Coordinates: 38°21′27″N 31°24′59″E (38.35750°N, 31.41639°E)

## How to power on an external hard-drive after powering it off?


When I “safely remove” an external hard drive from my file manager (Thunar), the whole drive is powered off and disappears from /dev. I therefore guess that, under the hood, this is done by calling udisksctl power-off -b /dev/sdX, which has the same effect.

I thought it should somehow be possible to bring the device up again. After reading https://stackoverflow.com/a/12675749, I suspected that powering off might be done by writing to /sys/bus/usb/devices/usbX/power/control, but that sysfs file appears to remain untouched.

So how can I power on an external device again after powering it off with udisksctl? It is annoying that I cannot re-mount a partition after unmounting it from the file manager.
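One approach worth trying (an assumption, not a confirmed fix): when udisksctl powers a device off, the kernel detaches it from its USB port, and re-binding the port's driver through the generic sysfs bind/unbind interface often brings the device back. The sketch below is dry-run by default; the port ID `2-1` is hypothetical and must be replaced with the real one listed under /sys/bus/usb/drivers/usb/.

```shell
#!/bin/sh
# Sketch: re-bind a USB port driver to bring back a device powered off
# with `udisksctl power-off`. PORT is a hypothetical placeholder --
# find the real bus-port ID by listing /sys/bus/usb/drivers/usb/.
PORT="2-1"
DRY_RUN=1   # set to 0 and run as root to actually write to sysfs

rebind() {
  if [ "$DRY_RUN" -eq 1 ]; then
    # Only print what would be written to sysfs.
    printf 'would run: echo %s > /sys/bus/usb/drivers/usb/unbind\n' "$PORT"
    printf 'would run: echo %s > /sys/bus/usb/drivers/usb/bind\n' "$PORT"
  else
    echo "$PORT" > /sys/bus/usb/drivers/usb/unbind
    sleep 1
    echo "$PORT" > /sys/bus/usb/drivers/usb/bind
  fi
}

rebind
```

Caveat: if the hub actually cut VBUS power to the port, re-binding the driver may not help, and physically replugging the cable (or toggling per-port power with a tool such as uhubctl, on hubs that support it) may be the only reliable way back.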


mount external-hdd udisks



## Anatolia

The traditional definition of Anatolia within modern Turkey[1][2]

Location: Western Asia; Middle East
Coordinates: 39°N 35°E
Area: 756,000 km2 (292,000 sq mi)[3]
Capital and largest city: Ankara (pop. 5,270,575[4])
Demonym: Anatolian
Languages: Turkish, Kurdish, Armenian, Greek, Arabic, Kabardian, various others
Ethnic groups: Turks, Kurds, Armenians, Greeks, Arabs, Laz, various others

Anatolia (from Greek Ἀνατολή Anatolḗ; Turkish: Anadolu “east” or “[sun]rise”), also known as Asia Minor (Medieval and Modern Greek: Μικρά Ἀσία Mikrá Asía, “small Asia”; Turkish: Küçük Asya), Asian Turkey, the Anatolian peninsula, or the Anatolian plateau, is the westernmost protrusion of Asia, which makes up the majority of modern-day Turkey. The region is bounded by the Black Sea to the north, the Mediterranean Sea to the south, the Armenian Plateau to the east, and the Aegean Sea to the west. The Sea of Marmara forms a connection between the Black and Aegean Seas through the Bosphorus and Dardanelles straits and separates Anatolia from Thrace on the European mainland.

Traditionally, Anatolia is considered to extend in the east to a line running between the Gulf of Alexandretta and the Black Sea, bounded by the Armenian Highlands (Armenia Major). This eastern region, now largely contained within the Eastern Anatolia Region of far northeastern Turkey, converges with the Lesser Caucasus – an area that was incorporated into the Russian Empire region of Transcaucasia in the 19th century.[5][6] Thus, traditionally, Anatolia comprises approximately the western two-thirds of the Asian part of Turkey.

Anatolia is often considered to be synonymous with Asian Turkey, which comprises almost the entire country;[7] its eastern and southeastern borders are widely taken to be the Turkish borders with neighboring Georgia, Armenia, Azerbaijan, Iran, Iraq, and Syria, in clockwise direction.

The ancient inhabitants of Anatolia spoke the now-extinct Anatolian languages, which were largely replaced by the Greek language starting from classical antiquity and during the Hellenistic, Roman and Byzantine periods. Major Anatolian languages included Hittite, Luwian, and Lydian among other more poorly attested relatives. The Turkification of Anatolia began under the Seljuk Empire in the late 11th century and continued under the Ottoman Empire between the early 14th and early 20th centuries. However, various non-Turkic languages continue to be spoken by minorities in Anatolia today, including Kurdish, Neo-Aramaic, Armenian, Arabic, Laz, Georgian and Greek. Other ancient peoples in the region included Galatians, Hurrians, Assyrians, Hattians, Cimmerians, as well as Ionian, Dorian, and Aeolian Greeks.

## Contents

• 1 Definition
• 2 Onomastics and etymology
• 3 History

• 3.1 Prehistory
• 3.2 Ancient Near East (Bronze and Iron Ages)

• 3.2.1 Hattians and Hurrians
• 3.2.2 Assyrian Empire (21st–18th centuries BC)
• 3.2.3 Hittite Kingdom and Empire (17th–12th centuries BC)
• 3.2.4 Neo-Hittite kingdoms (c. 1180–700 BC)
• 3.2.5 Neo-Assyrian Empire (10th–7th centuries BC)
• 3.2.6 Cimmerian and Scythian invasions (8th–7th centuries BC)
• 3.2.7 Greek West
• 3.3 Classical Antiquity
• 3.4 Early Christian Period
• 3.5 Late Medieval Period
• 3.6 Ottoman Empire
• 3.7 Modern times
• 4 Geography
• 5 Geology

• 5.1 Climate
• 5.2 Ecoregions
• 6 Demographics
• 7 Cuisine
• 9 References
• 10 Bibliography

## Definition

The location of Turkey (within the rectangle) in reference to the European continent. Anatolia roughly corresponds to the Asian part of Turkey

1907 map of Asia Minor, showing the local ancient kingdoms. The map includes the East Aegean Islands and the island of Cyprus to Anatolia’s continental shelf.

The Anatolian peninsula, also called Asia Minor, is bounded by the Black Sea to the north, the Mediterranean Sea to the south, the Aegean Sea to the west, and the Sea of Marmara to the northwest, which separates Anatolia from Thrace in Europe. The Encyclopedia Britannica defines it as “the peninsula of land that today constitutes the Asian portion of Turkey.”[8]

Traditionally, Anatolia is considered to extend in the east to an indefinite line running from the Gulf of Alexandretta to the Black Sea, coterminous with the Anatolian Plateau. This traditional geographical definition is used, for example, in the latest edition of Merriam-Webster’s Geographical Dictionary.[1] Under this definition, Anatolia is bounded to the east by the Armenian Highlands, and by the Euphrates before that river bends to the southeast to enter Mesopotamia.[2][not in citation given] To the southeast, it is bounded by the ranges that separate it from the Orontes valley in Syria and from the Mesopotamian plain.[2]

Following the Armenian Genocide and establishment of the Republic of Turkey, the Armenian Highlands (or Western Armenia) were renamed “Eastern Anatolia” (literally The Eastern East) by the Turkish government,[9][10] being effectively co-terminous with Asian Turkey. Turkey’s First Geography Congress in 1941 created two regions to the east of the Gulf of Iskenderun-Black Sea line named the Eastern Anatolia Region and the Southeastern Anatolia Region,[11] the former largely corresponding to the western part of the Armenian Highland, the latter to the northern part of the Mesopotamian plain. Vazken Davidian terms the expanded use of “Anatolia” to apply to territory formerly referred to as Armenia as “an historical imposition”, and notes that a growing body of literature is uncomfortable with referring to the Ottoman East as “Eastern Anatolia”.[12]

## Onomastics and etymology

The oldest known reference to Anatolia – as “Land of the Hatti” – appears on Mesopotamian cuneiform tablets from the period of the Akkadian Empire (2350–2150 BC).[citation needed] The first recorded name the Greeks used for the Anatolian peninsula, Ἀσία (Asía),[13] presumably echoed the name of the Assuwa league in western Anatolia.[citation needed] As the name “Asia” broadened its scope to apply to other areas east of the Mediterranean, Greeks in Late Antiquity came to use the name Μικρὰ Ἀσία (Mikrá Asía) or Asia Minor, meaning “Lesser Asia” to refer to present-day Anatolia.

The English-language name Anatolia itself derives from the Greek ἀνατολή (anatolḗ) meaning “the East” or more literally “sunrise” (comparable to the Latin-derived terms “levant” and “orient”).[14][15] The precise reference of this term has varied over time, perhaps originally referring to the Aeolian, Ionian and Dorian colonies on the west coast of Asia Minor. In the Byzantine Empire, the Anatolic Theme (Ἀνατολικόν θέμα) was a theme covering the western and central parts of Turkey’s present-day Central Anatolia Region.[16][17]

The term “Anatolia” is Medieval Latin.[18]

The modern Turkish form of Anatolia, Anadolu, derives from the Greek name Aνατολή (Anatolḗ). The Russian male name Anatoly and the French Anatole share the same linguistic origin.

In English, the name Turkey for ancient Anatolia first appeared c. 1369. It derives from the Medieval Latin Turchia (meaning “Land of the Turks”, Turkish Türkiye), a name originally used by Europeans to designate the parts of Anatolia controlled by the Seljuk Sultanate of Rum after the Battle of Manzikert (1071).[citation needed]

## History

### Prehistory

Mural of aurochs, a deer, and humans in Çatalhöyük, which is the largest and best-preserved Neolithic site found to date. It was registered as a UNESCO World Heritage Site in 2012.[19]

Human habitation in Anatolia dates back to the Paleolithic.[20] Neolithic Anatolia has been proposed as the homeland of the Indo-European language family, although linguists tend to favour a later origin in the steppes north of the Black Sea. However, it is clear that the Anatolian languages, the oldest attested branch of Indo-European, were spoken in Anatolia from at least the 19th century BC.[citation needed]

### Ancient Near East (Bronze and Iron Ages)

#### Hattians and Hurrians

The earliest historical records of Anatolia stem from the southeast of the region and are from the Mesopotamian-based Akkadian Empire during the reign of Sargon of Akkad in the 24th century BC. Scholars generally believe the earliest indigenous populations of Anatolia were the Hattians and Hurrians. The Hattians spoke a language of unclear affiliation, and the Hurrian language belongs to a small family called Hurro-Urartian, all these languages now being extinct; relationships with indigenous languages of the Caucasus have been proposed[21] but are not generally accepted. The region was famous for exporting raw materials, and areas of Hattian- and Hurrian-populated southeast Anatolia were colonised by the Akkadians.[22]

#### Assyrian Empire (21st–18th centuries BC)

After the fall of the Akkadian Empire in the mid-21st century BC, the Assyrians, who were the northern branch of the Akkadian people, colonised parts of the region between the 21st and mid-18th centuries BC and claimed its resources, notably silver. One of the numerous cuneiform records dated circa 20th century BC, found in Anatolia at the Assyrian colony of Kanesh, uses an advanced system of trading computations and credit lines.[22]

#### Hittite Kingdom and Empire (17th–12th centuries BC)

The Lion Gate at Hattusa, capital of the Hittite Empire. The city’s history dates to before 2000 BC.

Unlike the Akkadians and their descendants, the Assyrians, whose Anatolian possessions were peripheral to their core lands in Mesopotamia, the Hittites were centred at Hattusa (modern Boğazkale) in north-central Anatolia by the 17th century BC. They were speakers of an Indo-European language, the Hittite language, or nesili (the language of Nesa) in Hittite. The Hittites emerged from the local ancient cultures that grew up in Anatolia, combined with the arrival of Indo-European languages. Attested for the first time in the Assyrian tablets of Nesa around 2000 BC, they conquered Hattusa in the 18th century BC, imposing themselves over Hattian- and Hurrian-speaking populations. According to the widely accepted Kurgan theory on the Proto-Indo-European homeland, however, the Hittites (along with the other Indo-European ancient Anatolians) were themselves relatively recent immigrants to Anatolia from the north. They did not necessarily displace the earlier population genetically; rather, they assimilated into the culture of the former peoples while preserving the Hittite language.

The Hittites adopted the cuneiform script, invented in Mesopotamia. During the Late Bronze Age circa 1650 BC, they created a kingdom, the Hittite New Kingdom, which became an empire in the 14th century BC after the conquest of Kizzuwatna in the south-east and the defeat of the Assuwa league in western Anatolia. The empire reached its height in the 13th century BC, controlling much of Asia Minor, northwestern Syria and northwest upper Mesopotamia. They failed to reach the Anatolian coasts of the Black Sea, however, as a non-Indo-European people, the semi-nomadic pastoralist and tribal Kaskians, had established themselves there, displacing earlier Palaic-speaking Indo-Europeans.[23] Much of the history of the Hittite Empire concerned war with the rival empires of Egypt, Assyria and the Mitanni.[24]

The Egyptians eventually withdrew from the region after failing to gain the upper hand over the Hittites and becoming wary of the power of Assyria, which had destroyed the Mitanni Empire.[24] The Assyrians and Hittites were then left to battle over control of eastern and southern Anatolia and colonial territories in Syria. The Assyrians had better success than the Egyptians, annexing much Hittite (and Hurrian) territory in these regions.[25]

#### Neo-Hittite kingdoms (c. 1180–700 BC)

After 1180 BC, during the Late Bronze Age collapse, the Hittite empire disintegrated into several independent Syro-Hittite states, subsequent to losing much territory to the Middle Assyrian Empire and being finally overrun by the Phrygians, another Indo-European people who are believed to have migrated from the Balkans. The Phrygian expansion into southeast Anatolia was eventually halted by the Assyrians, who controlled that region.[25]

Arameans

Arameans encroached over the borders of south-central Anatolia in the century or so after the fall of the Hittite empire, and some of the successor states in this region became an amalgam of Hittites and Arameans; these became known as Syro-Hittite states.

Luwians

Lycian rock cut tombs of Kaunos (Dalyan)

In central and western Anatolia, another Indo-European people, the Luwians, came to the fore, circa 2000 BC. Their language was closely related to Hittite.[26] The general consensus amongst scholars is that Luwian was spoken—to a greater or lesser degree—across a large area of western Anatolia, including (possibly) Wilusa (Troy), the Seha River Land (to be identified with the Hermos and/or Kaikos valley), and the kingdom of Mira-Kuwaliya with its core territory of the Maeander valley.[27] From the 9th century BC, Luwian regions coalesced into a number of states such as Lydia, Caria and Lycia, all of which had Hellenic influence.

#### Neo-Assyrian Empire (10th–7th centuries BC)

From the 10th to late 7th centuries BC, much of Anatolia (particularly the east, central, and southeastern regions) fell to the Neo-Assyrian Empire, including all of the Syro-Hittite states, Tabal, Kingdom of Commagene, the Cimmerians and Scythians and swathes of Cappadocia.

The Neo-Assyrian empire collapsed due to a bitter series of civil wars followed by a combined attack by Medes, Persians, Scythians and their own Babylonian relations. The last Assyrian city to fall was Harran in southeast Anatolia. This city was the birthplace of the last king of Babylon, the Assyrian Nabonidus and his son and regent Belshazzar. Much of the region then fell to the short-lived Iran-based Median Empire, with the Babylonians and Scythians briefly appropriating some territory.

#### Cimmerian and Scythian invasions (8th–7th centuries BC)

From the late 8th century BC, a new wave of Indo-European-speaking raiders entered northern and northeast Anatolia: the Cimmerians and Scythians. The Cimmerians overran Phrygia and the Scythians threatened to do the same to Urartu and Lydia, before both were finally checked by the Assyrians.

#### Greek West

Portrait of an Achaemenid Satrap of Asia Minor (Heraclea, in Bithynia), end of 6th century BCE, probably under Darius I.[28]

The north-western coast of Anatolia was inhabited by Greeks of the Achaean/Mycenaean culture from the 20th century BC, related to the Greeks of south eastern Europe and the Aegean.[29] Beginning with the Bronze Age collapse at the end of the 2nd millennium BC, the west coast of Anatolia was settled by Ionian Greeks, usurping the area of the related but earlier Mycenaean Greeks. Over several centuries, numerous Ancient Greek city-states were established on the coasts of Anatolia. Greeks started Western philosophy on the western coast of Anatolia (Pre-Socratic philosophy).[29]

### Classical Antiquity

Ancient regions of Anatolia (500 BC)

Asia Minor in the Greco-Roman period. The classical regions and their main settlements

Asia Minor in the early 2nd century AD. The Roman provinces under Trajan.

The temple of Athena (funded by Alexander the Great) in the ancient Greek city of Priene

In classical antiquity, Anatolia was described by Herodotus and later historians as divided into regions named after tribes such as Lydia, Lycia, Caria, Mysia, Bithynia, Phrygia, Galatia, Lycaonia, Pisidia, Paphlagonia, Cilicia, and Cappadocia. By that time, the populations were a mixture of the ancient Anatolian or “Syro-Hittite” substrate and post-Bronze-Age-collapse “Thraco-Phrygian” and more recent Greco-Macedonian incursions.

The Dying Galatian was a famous statue commissioned some time between 230–220 BC by King Attalos I of Pergamon to honor his victory over the Celtic Galatians in Anatolia.

Anatolia is known as the birthplace of minted coinage (as opposed to unminted coinage, which first appears in Mesopotamia at a much earlier date) as a medium of exchange, some time in the 7th century BC in Lydia. The use of minted coins continued to flourish during the Greek and Roman eras.[30][31]

During the 6th century BC, all of Anatolia was conquered by the Persian Achaemenid Empire, the Persians having usurped the Medes as the dominant dynasty in Iran. In 499 BC, the Ionian city-states on the west coast of Anatolia rebelled against Persian rule. The Ionian Revolt, as it became known, though quelled, initiated the Greco-Persian Wars, which ended in a Greek victory in 449 BC, and the Ionian cities regained their independence, alongside the withdrawal of the Persian forces from their European territories.

In 334 BC, the Macedonian Greek king Alexander the Great conquered the peninsula from the Achaemenid Persian Empire.[32] Alexander’s conquest opened up the interior of Asia Minor to Greek settlement and influence.

Sanctuary of Commagene Kings on Mount Nemrut (1st century BC)

Following the death of Alexander and the breakup of his empire, Anatolia was ruled by a series of Hellenistic kingdoms, such as the Attalids of Pergamum and the Seleucids, the latter controlling most of Anatolia. A period of peaceful Hellenization followed, such that the local Anatolian languages had been supplanted by Greek by the 1st century BC. In 133 BC the last Attalid king bequeathed his kingdom to the Roman Republic, and western and central Anatolia came under Roman control, but Hellenistic culture remained predominant. Further annexations by Rome, in particular of the Kingdom of Pontus by Pompey, brought all of Anatolia under Roman control, except for the eastern frontier with the Parthian Empire, which remained unstable for centuries, causing a series of wars, culminating in the Roman-Parthian Wars.

### Early Christian Period

After the division of the Roman Empire, Anatolia became part of the East Roman, or Byzantine Empire. Anatolia was one of the first places where Christianity spread, so that by the 4th century AD, western and central Anatolia were overwhelmingly Christian and Greek-speaking. For the next 600 years, while Imperial possessions in Europe were subjected to barbarian invasions, Anatolia would be the center of the Hellenic world. Byzantine control was challenged by Arab raids starting in the eighth century (see Arab–Byzantine wars), but in the ninth and tenth century a resurgent Byzantine Empire regained its lost territories, including even long lost territory such as Armenia and Syria (ancient Aram).

### Late Medieval Period

Byzantine Anatolia and the Byzantine-Arab frontier zone in the mid-9th century

Beyliks and other states around Anatolia, c. 1300.

In the 10 years following the Battle of Manzikert in 1071, the Seljuk Turks from Central Asia migrated over large areas of Anatolia, with particular concentrations around the northwestern rim.[33] The Turkish language and the Islamic religion were gradually introduced as a result of the Seljuk conquest, and this period marks the start of Anatolia’s slow transition from predominantly Christian and Greek-speaking, to predominantly Muslim and Turkish-speaking (although ethnic groups such as Armenians, Greeks, and Assyrians remained numerous and retained Christianity and their native languages). In the following century, the Byzantines managed to reassert their control in western and northern Anatolia. Control of Anatolia was then split between the Byzantine Empire and the Seljuk Sultanate of Rûm, with the Byzantine holdings gradually being reduced.[34]

In 1255, the Mongols swept through eastern and central Anatolia, and would remain until 1335. The Ilkhanate garrison was stationed near Ankara.[34][35] After the decline of the Ilkhanate from 1335–1353, the Mongol Empire’s legacy in the region was the Uyghur Eretna Dynasty that was overthrown by Kadi Burhan al-Din in 1381.[36]

By the end of the 14th century, most of Anatolia was controlled by various Anatolian beyliks. Smyrna fell in 1330, and the last Byzantine stronghold in Anatolia, Philadelphia, fell in 1390. The Turkmen beyliks were under the control of the Mongols, at least nominally, through declining Seljuk sultans.[37][38] The beyliks did not mint coins in the names of their own leaders while they remained under the suzerainty of the Mongol Ilkhanids.[39] The Osmanli ruler Osman I was the first Turkish ruler to mint coins in his own name, in the 1320s; they bear the legend “Minted by Osman son of Ertugrul”.[40] Since the minting of coins was a prerogative accorded in Islamic practice only to a sovereign, it can be considered that the Osmanli, or Ottoman Turks, had become formally independent from the Mongol khans.[41]

### Ottoman Empire

Among the Turkish leaders, the Ottomans emerged as a great power under Osman I and his son Orhan I.[42][43] The Anatolian beyliks were successively absorbed into the rising Ottoman Empire during the 15th century.[44] It is not well understood how the Osmanlı, or Ottoman Turks, came to dominate their neighbours, as the history of medieval Anatolia is still little known.[45] The Ottomans completed the conquest of the peninsula in 1517 with the taking of Halicarnassus (modern Bodrum) from the Knights of Saint John.[46]

### Modern times

Ethnographic map of Anatolia from 1911.

With the acceleration of the decline of the Ottoman Empire in the early 19th century, and as a result of the expansionist policies of the Russian Empire in the Caucasus, many Muslim nations and groups in that region, mainly Circassians, Tatars, Azeris, Lezgis, Chechens and several Turkic groups left their homelands and settled in Anatolia. As the Ottoman Empire further shrank in the Balkan regions and then fragmented during the Balkan Wars, much of the non-Christian populations of its former possessions, mainly Balkan Muslims (Bosnian Muslims, Albanians, Turks, Muslim Bulgarians and Greek Muslims such as the Vallahades from Greek Macedonia), were resettled in various parts of Anatolia, mostly in formerly Christian villages throughout Anatolia.

A continuous reverse migration occurred from the early 19th century onward, as Greeks from Anatolia, Constantinople and the Pontus area migrated toward the newly independent Kingdom of Greece, and also toward the United States, the southern part of the Russian Empire, Latin America and the rest of Europe.

Following the Russo-Persian Treaty of Turkmenchay (1828) and the incorporation of Eastern Armenia into the Russian Empire, another migration involved the large Armenian population of Anatolia, which recorded significant migration rates from Western Armenia (Eastern Anatolia) toward the Russian Empire, especially toward its newly established Armenian provinces.

Anatolia remained multi-ethnic until the early 20th century (see the rise of nationalism under the Ottoman Empire). During World War I, the Armenian Genocide, the Greek genocide (especially in Pontus), and the Assyrian genocide almost entirely removed the ancient indigenous communities of Armenian, Greek, and Assyrian populations in Anatolia and surrounding regions. Following the Greco-Turkish War of 1919–1922, most remaining ethnic Anatolian Greeks were forced out during the 1923 population exchange between Greece and Turkey. Many more have left Turkey since, leaving fewer than 5,000 Greeks in Anatolia today. Since the foundation of the Republic of Turkey in 1923, Anatolia has been within Turkey, its inhabitants being mainly Turks and Kurds (see demographics of Turkey and history of Turkey).

## Geology

Anatolia’s terrain is structurally complex. A central massif composed of uplifted blocks and downfolded troughs, covered by recent deposits and giving the appearance of a plateau with rough terrain, is wedged between two folded mountain ranges that converge in the east. True lowland is confined to a few narrow coastal strips along the Aegean, Mediterranean, and Black Sea coasts. Flat or gently sloping land is rare and largely confined to the deltas of the Kızıl River, the coastal plains of Çukurova and the valley floors of the Gediz River and the Büyük Menderes River as well as some interior high plains in Anatolia, mainly around Lake Tuz (Salt Lake) and the Konya Basin (Konya Ovasi).

### Climate

Anatolia has a varied range of climates. The central plateau is characterized by a continental climate, with hot summers and cold snowy winters. The south and west coasts enjoy a typical Mediterranean climate, with mild rainy winters, and warm dry summers.[47] The Black Sea and Marmara coasts have a temperate oceanic climate, with cool foggy summers and much rainfall throughout the year.

### Ecoregions

Anatolia hosts a diverse range of plant and animal communities.

The mountains and coastal plain of northern Anatolia experience a humid and mild climate. There are temperate broadleaf, mixed and coniferous forests. The central and eastern plateau, with its drier continental climate, has deciduous forests and forest steppes. Western and southern Anatolia, which have a Mediterranean climate, contain Mediterranean forests, woodlands, and scrub ecoregions.

• Euxine-Colchic deciduous forests: These temperate broadleaf and mixed forests extend across northern Anatolia, lying between the mountains of northern Anatolia and the Black Sea. They include the enclaves of temperate rainforest lying along the southeastern coast of the Black Sea in eastern Turkey and Georgia.[48]
• Northern Anatolian conifer and deciduous forests: These forests occupy the mountains of northern Anatolia, running east and west between the coastal Euxine-Colchic forests and the drier, continental climate forests of central and eastern Anatolia.[49]
• Central Anatolian deciduous forests: These forests of deciduous oaks and evergreen pines cover the plateau of central Anatolia.[50]
• Central Anatolian steppe: These dry grasslands cover the drier valleys and surround the saline lakes of central Anatolia, and include halophytic (salt tolerant) plant communities.[51]
• Eastern Anatolian deciduous forests: This ecoregion occupies the plateau of eastern Anatolia. The drier and more continental climate is beneficial for steppe-forests dominated by deciduous oaks, with areas of shrubland, montane forest, and valley forest.[52]
• Anatolian conifer and deciduous mixed forests: These forests occupy the western, Mediterranean-climate portion of the Anatolian plateau. Pine forests and mixed pine and oak woodlands and shrublands are predominant.[53]
• Aegean and Western Turkey sclerophyllous and mixed forests: These Mediterranean-climate forests occupy the coastal lowlands and valleys of western Anatolia bordering the Aegean Sea. The ecoregion has forests of Turkish pine (Pinus brutia), oak forests and woodlands, and maquis shrubland of Turkish pine and evergreen sclerophyllous trees and shrubs, including Olive (Olea europaea), Strawberry Tree (Arbutus unedo), Arbutus andrachne, Kermes Oak (Quercus coccifera), and Bay Laurel (Laurus nobilis).[54]
• Southern Anatolian montane conifer and deciduous forests: These mountain forests occupy the Mediterranean-climate Taurus Mountains of southern Anatolia. Conifer forests are predominant, chiefly Anatolian black pine (Pinus nigra), Cedar of Lebanon (Cedrus libani), Taurus fir (Abies cilicica), and juniper (Juniperus foetidissima and J. excelsa). Broadleaf trees include oaks, hornbeam, and maples.[55]
• Eastern Mediterranean conifer-sclerophyllous-broadleaf forests: This ecoregion occupies the coastal strip of southern Anatolia between the Taurus Mountains and the Mediterranean Sea. Plant communities include broadleaf sclerophyllous maquis shrublands, forests of Aleppo Pine (Pinus halepensis) and Turkish Pine (Pinus brutia), and dry oak (Quercus spp.) woodlands and steppes.[56]

## Demographics

Almost 80% of the people currently residing in Anatolia are Turks. Kurds constitute a major community in southeastern Anatolia,[57] and are the largest ethnic minority. Abkhazians, Albanians, Arabs, Arameans, Armenians, Assyrians, Azerbaijanis, Bosnian Muslims, Circassians, Gagauz, Georgians, Serbs, Greeks, Hemshin, Jews, Laz, Levantines, Pomaks, Zazas and a number of other ethnic groups also live in Anatolia in smaller numbers.[citation needed]

## Cuisine

Bamia is a traditional Anatolian stew prepared using lamb, okra and tomatoes as primary ingredients.[58]

## See also

• Aeolis
• Alacahöyük
• Anatolian hypothesis
• Anatolian languages
• Anatolianism
• Anatolian leopard
• Anatolian Plate
• Anatolian Shepherd
• Anatolian beyliks
• Ancient kingdoms of Anatolia
• Antigonid dynasty
• Attalid dynasty
• Bithynia
• Byzantine Empire
• Caria
• Çatalhöyük
• Cilicia
• Doris (Asia Minor)
• Empire of Nicaea
• Empire of Trebizond
• Ephesus
• Galatia
• Gordium
• Halicarnassus
• Hattusa
• History of Anatolia
• Hittites
• Ionia
• Lycaonia
• Lycia
• Lydia
• Midas
• Miletus
• Myra
• Mysia
• Ottoman Empire
• Pamphylia
• Paphlagonia
• Pentarchy
• Pergamon
• Phrygia
• Pisidia
• Pontic Greeks
• Pontus
• Rumi
• Saint Anatolia
• Saint John
• Saint Nicholas
• Saint Paul
• Sardis
• Seleucid Empire
• Great Seljuq Empire
• Seven churches of Asia
• Seven Sleepers
• Sultanate of Rum
• Tarsus
• Troy
• Turkey
• Turkic migration

## References

1. ^ ab Merriam-Webster’s Geographical Dictionary. 2001. p. 46. ISBN 0-87779-546-0. Retrieved 18 May 2001.
2. ^ abc Stephen Mitchell, Anatolia: Land, Men, and Gods in Asia Minor. The Celts in Anatolia and the Impact of Roman Rule. Clarendon Press, 1995. 266 pages. ISBN 978-0198150299 [1]
3. ^ Sansal, Burak. “History of Anatolia”.
4. ^ (TÜİK), Türkiye İstatistik Kurumu. “Türkiye İstatistik Kurumu, Adrese Dayalı Nüfus Kayıt Sistemi Sonuçları, 2015”. www.tuik.gov.tr.
5. ^ Adalian, Rouben Paul (2010). Historical dictionary of Armenia (2nd ed.). Lanham, MD: Scarecrow Press. pp. 336–8. ISBN 0810874504.
6. ^ Mørkholm, Otto (1991). Grierson, Philip; Westermark, Ulla, eds. Early Hellenistic Coinage: From the Accession of Alexander to the Peace of Apamea (336–188 B.C.) (Repr. ed.). Cambridge: Cambridge University Press. p. 175. ISBN 0521395046.
7. ^ Hooglund, Eric (2004). “Anatolia”. Encyclopedia of the Modern Middle East and North Africa. Macmillan/Gale – via Encyclopedia.com. Anatolia comprises more than 95 percent of Turkey’s total land area.
8. ^ “Anatolia – History, Map, & Facts”. Encyclopedia Britannica. Retrieved 2018-11-23.
9. ^ Sahakyan, Lusine (2010). Turkification of the Toponyms in the Ottoman Empire and the Republic of Turkey. Montreal: Arod Books. ISBN 978-0969987970.
10. ^ Hovannisian, Richard (2007). The Armenian genocide cultural and ethical legacies. New Brunswick, N.J.: Transaction Publishers. p. 3. ISBN 1412835925.
11. ^ Ali Yiğit, “Geçmişten Günümüze Türkiye’yi Bölgelere Ayıran Çalışmalar ve Yapılması Gerekenler”, Ankara Üniversitesi Türkiye Coğrafyası Araştırma ve Uygulama Merkezi, IV. Ulusal Coğrafya Sempozyumu, “Avrupa Birliği Sürecindeki Türkiye’de Bölgesel Farklılıklar”, pp. 34–35.
12. ^ Vazken Khatchig Davidian, “Imagining Ottoman Armenia: Realism and Allegory in Garabed Nichanian’s Provincial Wedding in Moush and Late Ottoman Art Criticism”, p7 & footnote 34, in Études arméniennes contemporaines volume 6, 2015.
13. ^ Henry George Liddell, Robert Scott, Ἀσία, A Greek-English Lexicon, on Perseus
14. ^ Henry George Liddell; Robert Scott. “A Greek-English Lexicon”.
15. ^ “Online Etymology Dictionary”.
16. ^ “On the First Thema, called Anatolikón. This theme is called Anatolikón or Theme of the Anatolics, not because it is above and in the direction of the east where the sun rises, but because it lies to the East of Byzantium and Europe.” Constantine VII Porphyrogenitus, De Thematibus, ed. A. Pertusi. Vatican: Vatican Library, 1952, pp. 59–61.
17. ^ John Haldon, Byzantium, a History, 2002. p. 32.
18. ^ Anatolia – Online Etymology Dictionary
19. ^ “Çatalhöyük added to UNESCO World Heritage List”. Global Heritage Fund. 3 July 2012. Archived from the original on January 17, 2013. Retrieved 9 February 2013.
20. ^ Stiner, Mary C.; Kuhn, Steven L.; Güleç, Erksin (2013). “Early Upper Paleolithic shell beads at Üçağızlı Cave I (Turkey): Technology and the socioeconomic context of ornament life-histories”. Journal of Human Evolution. 64 (5): 380–398. doi:10.1016/j.jhevol.2013.01.008. ISSN 0047-2484. PMID 23481346.
21. ^ Bryce 2005:12
22. ^ ab Freeman, Charles (1999). Egypt, Greece and Rome: Civilizations of the Ancient Mediterranean. Oxford University Press. ISBN 0-19-872194-3.
23. ^ Carruba, O. Das Palaische. Texte, Grammatik, Lexikon. Wiesbaden: Harrassowitz, 1970. StBoT 10
24. ^ ab Georges Roux – Ancient Iraq
25. ^ ab Georges Roux, Ancient Iraq. Penguin Books, 1966.
26. ^ Melchert 2003
27. ^ Watkins 1994; id. 1995:144–51; Starke 1997; Melchert 2003; for the geography Hawkins 1998
28. ^ CAHN, HERBERT A.; GERIN, DOMINIQUE (1988). Themistocles at Magnesia. pp. 20 & Plate 3.
29. ^ ab Carl Roebuck, The World of Ancient Times
30. ^ Howgego, C. J. (1995). Ancient History from Coins. ISBN 0-415-08992-1.
31. ^ Asia Minor Coins – an index of Greek and Roman coins from Asia Minor (ancient Anatolia)
32. ^ Roisman, Joseph; Worthington, Ian (2010). A Companion to Ancient Macedonia. John Wiley and Sons. ISBN 1-4051-7936-8.
33. ^ Angold, Michael (1997). The Byzantine Empire 1025–1204. p. 117. ISBN 0-582-29468-1.
34. ^ ab H. M. Balyuzi Muḥammad and the course of Islám, p. 342
35. ^ John Freely Storm on Horseback: The Seljuk Warriors of Turkey, p. 83
36. ^ Clifford Edmund Bosworth-The new Islamic dynasties: a chronological and genealogical manual, p. 234
37. ^ Mehmet Fuat Köprülü, Gary Leiser-The origins of the Ottoman Empire, p. 33
38. ^ Peter Partner God of battles: holy wars of Christianity and Islam, p. 122
39. ^ Osman’s Dream: The History of the Ottoman Empire, p. 13
40. ^ Artuk – Osmanli Beyliginin Kurucusu, 27f
41. ^ Pamuk – A Monetary History, pp. 30–31
42. ^ “Osman I | Ottoman sultan”. Encyclopedia Britannica. Retrieved 2018-04-23.
43. ^ “Orhan | Ottoman sultan”. Encyclopedia Britannica. Retrieved 2018-04-23.
44. ^ Fleet, Kate. “The rise of the Ottomans (Chapter 11) – The New Cambridge History of Islam”. Cambridge Core. Retrieved 2018-04-23.
45. ^ Finkel, Caroline (2007). Osman’s Dream: The History of the Ottoman Empire. Basic Books. p. 5. ISBN 978-0-465-00850-6. Retrieved 6 June 2013.
46. ^ electricpulp.com. “HALICARNASSUS – Encyclopaedia Iranica”. www.iranicaonline.org. Retrieved 2018-04-23.
47. ^ Prothero, W.G. (1920). Anatolia. London: H.M. Stationery Office.
48. ^ “Euxine-Colchic deciduous forests”. Terrestrial Ecoregions. World Wildlife Fund. Retrieved May 25, 2008.
49. ^ “Northern Anatolian conifer and deciduous forests”. Terrestrial Ecoregions. World Wildlife Fund. Retrieved May 25, 2008.
50. ^ “Central Anatolian deciduous forests”. Terrestrial Ecoregions. World Wildlife Fund. Retrieved May 25, 2008.
51. ^ “Central Anatolian steppe”. Terrestrial Ecoregions. World Wildlife Fund. Retrieved May 25, 2008.
52. ^ “Eastern Anatolian deciduous forests”. Terrestrial Ecoregions. World Wildlife Fund. Retrieved May 25, 2008.
53. ^ “Anatolian conifer and deciduous mixed forests”. Terrestrial Ecoregions. World Wildlife Fund. Retrieved May 25, 2008.
54. ^ “Aegean and Western Turkey sclerophyllous and mixed forests”. Terrestrial Ecoregions. World Wildlife Fund. Retrieved May 25, 2008.
55. ^ “Southern Anatolian montane conifer and deciduous forests”. Terrestrial Ecoregions. World Wildlife Fund. Retrieved May 25, 2008.
56. ^ “Eastern Mediterranean conifer-sclerophyllous-broadleaf forests”. Terrestrial Ecoregions. World Wildlife Fund. Retrieved May 25, 2008.
57. ^ “A Kurdish Majority In Turkey Within One Generation?”. May 6, 2012.
58. ^ Webb, L.S.; Roten, L.G. (2009). The Multicultural Cookbook for Students. EBL-Schweitzer. ABC-CLIO. pp. 286–287. ISBN 978-0-313-37559-0.

## Bibliography


• Steadman, Sharon R.; McMahon, Gregory, eds. (2011). The Oxford Handbook of Ancient Anatolia (10,000–323 BCE). Oxford University Press. doi:10.1093/oxfordhb/9780195376142.001.0001. ISBN 9780195376142.

• Akat, Yücel, Neşe Özgünel, and Aynur Durukan. 1991. Anatolia: A World Heritage. Ankara: Kültür Bakanlığı.
• Brewster, Harry. 1993. Classical Anatolia: The Glory of Hellenism. London: I.B. Tauris.
• Donbaz, Veysel, and Şemsi Güner. 1995. The Royal Roads of Anatolia. Istanbul: Dünya.
• Dusinberre, Elspeth R. M. 2013. Empire, Authority, and Autonomy In Achaemenid Anatolia. Cambridge: Cambridge University Press.
• Gates, Charles, Jacques Morin, and Thomas Zimmermann. 2009. Sacred Landscapes In Anatolia and Neighboring Regions. Oxford: Archaeopress.
• Mikasa, Takahito, ed. 1999. Essays On Ancient Anatolia. Wiesbaden: Harrassowitz.
• Takaoğlu, Turan. 2004. Ethnoarchaeological Investigations In Rural Anatolia. İstanbul: Ege Yayınları.
• Taracha, Piotr. 2009. Religions of Second Millennium Anatolia. Wiesbaden: Harrassowitz.
• Taymaz, Tuncay, Y. Yilmaz, and Yildirim Dilek. 2007. The Geodynamics of the Aegean and Anatolia. London: Geological Society.

## Why should makefiles have an “install” target?

Coming from the world of C and C++, most build systems have an install target, notably Makefiles (where it is recommended by GNU, for example) or CMake. This target copies the runtime files (executables, libraries, …) into the operating system (for example, into C:\Program Files on Windows).

This feels really hacky, since in my view it is not the responsibility of the build system to install programs (that is actually the responsibility of the operating system / package manager). It also means the build system or build script must know the layout of installed programs, with environment variables, registry entries, symlinks, permissions, etc.

At best, build systems should have a release target that outputs an installable package (for example, a .deb or .msi), and then kindly ask the operating system to install it. That would also allow the user to uninstall without having to type make uninstall.

So, my question: why do build systems usually recommend having an install target?

• You’re arguing that “make install” does not fall within the responsibility of a build system, but that the much more involved and platform-specific responsibility of creating an installable package does.
– pmf
Nov 23 at 16:05

• Anyway: sometimes you want to install an application that is not handled by the OS/package manager (because it has dependencies that would cause conflicts impossible to resolve using the package manager, etc.). make install usually installs under /usr/local (or even /opt), which are directories not handled by the “core OS/package management system”. No idea whether Windows has some similar convention, though.
– Bakuriu
Nov 23 at 19:12

• “This feels really hacky.” Well, what did you expect from the world of C/C++? 😉
– Mason Wheeler
Nov 23 at 19:53

• Note that make install makes no sense when we talk about cross-compiling
– Hagen von Eitzen
Nov 24 at 13:41

• @HagenvonEitzen it does with DESTDIR.
– Nax
Nov 24 at 14:25


Tags: build-system, cmake, make, install

Many build scripts or Makefiles have an installation target because they were created before package managers existed, and because even today lots of systems don’t have package managers. Plus, there are systems where make install actually is the preferred way of managing packages.

• I’m curious about systems where make install is preferred. Apart from that, I meant program manager when I said that makefiles should create installable packages. I think almost all OSes come with a way of managing installed programs? For example, Windows has no package manager (apart from the store) but still has a way to manage installed programs (via .msi packages, for example)
– Synxis
Nov 23 at 14:13

• @Synxis BSD, Linux, Unix all use makefiles. Whether it’s preferred to use them for installation, I don’t know, but you often have that ability using make install.
– Rob
Nov 23 at 14:33

• In debian at least it’s preferred to use checkinstall over make install for two reasons: “You can easily remove the package with one step.” and “You can install the resulting package upon multiple machines.” – as checkinstall builds a .deb and installs it, it uses the package manager…
– Aaron Hall
Nov 23 at 17:01

• @Synxis – There are several Linux distributions (often called source distros) where the package manager installs programs by downloading a tar file, decompressing it, then running make install
– slebetman
Nov 23 at 19:15

• @AaronHall Correct me if I’m wrong, but I got the impression that a checkinstall invocation will actually use make install and monitor its actions for package building.
– cmaster
Nov 24 at 10:26

A makefile might have no install target, and more importantly, you can have programs which are not even supposed to be installable (e.g. because they should run from their build directory, or because they can run from wherever they happen to be placed). The install target is just a convention for typical makefiles.
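As a sketch of that convention (everything below is hypothetical: the project name, the generated script, and the chosen prefix), a typical install/uninstall pair honors a PREFIX variable so the user can pick the destination:

```shell
#!/bin/sh
# Minimal sketch of the install/uninstall convention. The "project"
# builds a trivial shell script; PREFIX selects the destination tree.
# Makefile recipes must be tab-indented, hence printf with \t escapes.
set -e
dir=$(mktemp -d)
cd "$dir"
printf 'PREFIX ?= /usr/local\n\nhello:\n\tprintf "echo hello\\n" > hello\n\ninstall: hello\n\tmkdir -p $(PREFIX)/bin\n\tcp hello $(PREFIX)/bin/hello\n\nuninstall:\n\trm -f $(PREFIX)/bin/hello\n' > Makefile
make install PREFIX="$dir/local"   # a user-writable prefix: no root needed
sh "$dir/local/bin/hello"          # prints: hello
make uninstall PREFIX="$dir/local"
```

Pointing PREFIX at a directory you own is exactly how make install is used without root; the default /usr/local is only a convention.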

However, many programs require external resources at run time (for example: fonts, databases, configuration files, etc.), and their executables often make assumptions about where those resources live. For example, your bash shell generally reads an initialization file from /etc/bash.bashrc. These resources are generally in the file system (see hier(7) for conventions about the file hierarchy), and the default file paths are built into your executable.

Try using strings(1) on most executables of your system. You’ll find out which file paths are built into them.
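For instance (a contrived sketch: the file /tmp/fakebin and the embedded path are made up; on a real system you would run something like strings /bin/bash | grep '^/'):

```shell
#!/bin/sh
# Sketch: strings(1) extracts printable runs (4+ characters by default)
# from a binary, which exposes file paths baked into it. The "binary"
# and the path /etc/myapp.conf are fabricated for this demo.
set -e
printf 'AB\001/etc/myapp.conf\001CD' > /tmp/fakebin
strings /tmp/fakebin | grep '^/etc/'   # prints: /etc/myapp.conf
```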

BTW, for many GNU programs using autoconf, you could run make install DESTDIR=/tmp/destdir/ without being root. Then /tmp/destdir/ is filled with the files that should be later packaged.
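The mechanics can be sketched like this (a throwaway hypothetical project; the point is that DESTDIR is prepended only at install time, while PREFIX is what ends up baked into paths):

```shell
#!/bin/sh
# Sketch of a staged install: DESTDIR redirects the whole installed
# tree into a scratch root, from which a package can later be built.
set -e
dir=$(mktemp -d)
cd "$dir"
printf 'PREFIX ?= /usr/local\n\ninstall:\n\tmkdir -p $(DESTDIR)$(PREFIX)/bin\n\tprintf "echo hello\\n" > $(DESTDIR)$(PREFIX)/bin/hello\n' > Makefile
make install DESTDIR="$dir/stage"   # no root required
ls "$dir/stage/usr/local/bin"       # prints: hello
```

The tree under $dir/stage mirrors the final filesystem layout, which is what packaging tools expect as input.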

FWIW, I tend to believe that my bismon (GPLv3+ licensed) program (described in my bismon-chariot-doc.pdf report) cannot be “installed”; I am not sure I could prove that, and I cannot imagine how I could make that program installable.

• DESTDIR or other prefixes are too often forgotten. As soon as external resources such as dynamic libraries are involved, it is not possible to build the software without knowing where it will be installed. Also great for installing to non-standard locations, e.g. /opt or into $HOME. The only way to avoid different prefixes is to use containers, but that’s of course a Linux-specific solution.
– amon
Nov 23 at 14:50

• I’ve seen more than one package that, if you tried DESTDIR=/tmp/destdir, would not work later when installed to the normal place, because DESTDIR was used in path generation.
– Joshua
Nov 23 at 15:34

• @amon: I’m not sure I would characterize containers as Linux-specific. Linux may be a common target platform for containerization, but some form of container technology exists in most modern operating systems.
– Kevin
Nov 23 at 16:25

• @Joshua It shouldn’t; DESTDIR should only be relevant during the install step. You should be able to do ./configure --prefix="/opt/foo" && make && DESTDIR=/tmp/foo make install and relocate the package to /opt/foo without any issue.
– Nax
Nov 24 at 14:27

There are several reasons which come to mind.

• Much package-creating software – the Debian build system, for example, and IIRC rpm as well – already expects the build script to “install” the program to some special subdirectory. So it is driven by backward compatibility, in both directions.
• A user may want to install the software to a local location, like the $HOME directory. Not all package managers support that.
• There may still be environments which do not have packages.

• I reworded the question a bit, I meant program manager when I said that makefiles should create installable packages.
– Synxis
Nov 23 at 14:15

One reason not mentioned is that there are plenty of times when you are not using the current version of the software, or are using a modified version of it. Trying to create a custom package is not only more work, but it can conflict with currently created and distributed packages. In open source code this happens a lot, especially when breaking changes are introduced in versions later than the one you are using.

Let’s say you’re using the open source project FOO, which is currently on version 2.0.1, while you are using version 1.3.0. You don’t want to use anything above that because version 2.0.0 is incompatible with what you are currently doing, but there is a single bug fix in 2.0.1 you desperately need. Having the make install option lets you install the modified 1.3.0 software without having to worry about creating a package and installing it on your system.

Linux distributions generally separate program maintenance from package maintenance. A build system that integrates package generation would force program maintainers to also perform package maintenance.

This is usually a bad idea. Distributions have lots of infrastructure to verify internal consistency, provide binaries for multiple target platforms, perform small alterations to better integrate with the rest of the system and provide a consistent experience for users reporting bugs.

To generate packages directly from a build system, you would have to either integrate or bypass all of this infrastructure. Integrating it would be a lot of work for questionable benefit, and bypassing it would give a worse user experience.

This is one of the “top of the food chain” problems that are typical in multi-party systems. If you have multiple complex systems, there needs to be a clear hierarchy of which system is responsible for coordinating all others.

In the case of software installation management, the package manager is this component, and it will run the package’s build system, then take the output through a convenient interface (“files in a directory after an installation step”), generate a package and prepare it for upload to a repository.

The package manager stands in the middle between the build system and the repository here, and is in the best position to integrate well with both.
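As a toy illustration of that interface (project name and files invented), the packaging step really is just “run the install into a directory, then archive the directory”:

```shell
#!/bin/sh
# Simulate what a packaging tool does around a build system (illustrative):
# 1. have the build system "install" into a staging directory
#    (the real command would be: make install DESTDIR="$stage"),
# 2. archive the staged tree as the package.
set -e
stage="./pkgroot"
mkdir -p "$stage/usr/bin" "$stage/usr/share/doc/foo"

# Stand-ins for the files an install step would produce:
printf '#!/bin/sh\necho foo 1.0\n' > "$stage/usr/bin/foo"
chmod 755 "$stage/usr/bin/foo"
echo 'foo documentation' > "$stage/usr/share/doc/foo/README"

# "Generate a package": archive the tree rooted at the stage directory.
tar -C "$stage" -czf foo_1.0.tar.gz usr
tar -tzf foo_1.0.tar.gz
```

Real tools add metadata, dependency scanning, and consistency checks on top, but the hand-off between build system and packager is exactly this “files in a directory after an installation step”.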

You may have noticed that only a few of the JavaScript packages available through npm are also available through apt. This is mainly because the JavaScript people decided that npm and its associated repository were going to be the top of their food chain, which made it close to impossible to ship these packages as Debian packages.

With my Debian Developer hat on: if you release open source software, please leave the packaging to distribution maintainers. It saves both you and us a lot of work.

• You’ve said nothing about why there’s an install target, and it seems to me that most of what you’ve written would apply to it too… – curiousdannii Nov 24 at 11:45

• @curiousdannii, there needs to be some interface between the build system and the package manager, and this happens to be the simplest one, so it won. – Simon Richter Nov 25 at 13:06

Well, application developers are the ones who know where each file should go. They could leave that in documentation, and have package maintainers read it and build a script for each package. Maybe the package maintainers will misinterpret the documentation and will have to debug the script until it works. This is inefficient. It’s better for the application developer to write a script that properly installs the application they have written.

The developer could write an install script with an arbitrary name, or make it part of some other script’s procedure. However, because there is a standard install command, make install (a convention that predates package managers), it has become really easy to make packages. If you look at the PKGBUILD template for making Arch Linux packages, you can see that the function that actually does the packaging simply runs make DESTDIR="$pkgdir/" install. This probably works as-is for the majority of packages, and for more still with a little modification. Thanks to make (and the autotools) being standard, packaging is really, really easy. – JoL (new contributor)
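For illustration, an Arch-style PKGBUILD of the shape described above might look roughly like this. The package name, version, and source are invented, the checksums and other required fields are omitted, and only the package() function is the point here:

```shell
# Hypothetical PKGBUILD sketch (field values are made up)
pkgname=foo
pkgver=1.3.0
pkgrel=1
arch=('x86_64')
source=("foo-$pkgver.tar.gz")

build() {
  cd "foo-$pkgver"
  ./configure --prefix=/usr
  make
}

package() {
  cd "foo-$pkgver"
  # The entire packaging step is a staged `make install`:
  make DESTDIR="$pkgdir/" install
}
```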
## 6 Answers

Many build scripts or Makefiles have an installation target because they were created before package managers existed, and because even today lots of systems don’t have package managers. Plus, there are systems where make install actually is the preferred way of managing packages.

• I’m curious about systems where make install is preferred. Apart from that, I meant program manager when I said that makefiles should create installable packages. I think almost all OSes come with a way of managing installed programs. For example, Windows has no package manager (apart from the store) but still has a way to manage installed programs (via .msi packages, for example). – Synxis Nov 23 at 14:13

• @Synxis BSD, Linux, Unix all use makefiles. Whether it’s preferred to use them for installation, I don’t know, but you often have that ability using make install. – Rob Nov 23 at 14:33

• In Debian at least it’s preferred to use checkinstall over make install for two reasons: “You can easily remove the package with one step.” and “You can install the resulting package upon multiple machines.” – as checkinstall builds a .deb and installs it, it uses the package manager… – Aaron Hall Nov 23 at 17:01

• @Synxis – There are several Linux distributions (often called source distros) where the package manager installs programs by downloading a tar file, decompressing it, and then running make install. – slebetman Nov 23 at 19:15

• @AaronHall Correct me if I’m wrong, but I got the impression that a checkinstall invocation will actually use make install and monitor its actions for package building. – cmaster Nov 24 at 10:26
answered Nov 23 at 13:59 – Jörg W Mittag
A makefile might have no install target at all, and more importantly, you can have programs which are not even supposed to be installable (e.g. because they should run from their build directory, or because they can run installed anywhere). The install target is just a convention for typical Makefiles.

However, many programs require external resources in order to run (for example: fonts, databases, configuration files, etc.), and their executables often make assumptions about these resources. For example, your bash shell generally reads an initialization file from /etc/bash.bashrc, and so on. These resources generally live in the file system (see hier(7) for conventions about the file hierarchy), and the default file paths are built into your executable.

Try using strings(1) on the executables on your system; you’ll find out which file paths are known to them.
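To make the idea concrete without depending on any particular binary, here is a rough, self-contained stand-in for that strings-plus-grep pipeline. The fake “binary” and the embedded paths are invented for the example; on a real system you would run something like strings /bin/bash | grep '^/etc' instead.

```shell
#!/bin/sh
# Poor man's strings(1): extract printable runs from a binary file and
# keep those that look like absolute paths. fake.bin is a made-up stand-in
# for a real executable with two baked-in resource paths.
set -e
printf 'ELF\1\2junk\0/etc/myapp.conf\0more\3junk\0/usr/share/myapp\0' > fake.bin

tr -c '[:print:]' '\n' < fake.bin | grep '^/' | sort -u
# prints:
#   /etc/myapp.conf
#   /usr/share/myapp
```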

BTW, for many GNU programs using autoconf, you could run make install DESTDIR=/tmp/destdir/ without being root. Then /tmp/destdir/ is filled with the files that should be later packaged.

FWIW, I tend to believe that my bismon (GPLv3+ licensed) program (described in my bismon-chariot-doc.pdf report) cannot be “installed”; I am not sure I could prove that, and I cannot imagine how I could make the program installable.

• DESTDIR or other prefixes are too often forgotten. As soon as external resources such as dynamic libraries are involved, it is not possible to build the software without knowing where it will be installed. Also great for installing to non-standard locations, e.g /opt or into $HOME. The only way to avoid different prefixes is to use containers, but that’s of course a Linux-specific solution. – amon Nov 23 at 14:50 • I’ve seen more than one package that if you tried DESTDIR=/tmp/destdir would not work later when installed to the normal place because DESTDIR was used in path generation. – Joshua Nov 23 at 15:34 • @amon: I’m not sure I would characterize containers as Linux-specific. Linux may be a common target platform for containerization, but some form of container technology exists in most modern operating systems. – Kevin Nov 23 at 16:25 • @Joshua It shouldn’t, DESTDIR should only be relevant during the install step. You should be able to do: ./configure --prefix="/opt/foo" && make && DESTDIR=/tmp/foo make install and be able to relocate the package to /opt/foo without any issue. – Nax Nov 24 at 14:27 A makefile might have no install target, and more importantly, you can have programs which are not even supposed to be installable (e.g. because they should run from their build directory, or because they can run installed anywhere). The install target is just a convention for usual makefile-s. However, many programs require external resources to be run (for example: fonts, databases, configuration files, etc). And their executable often make some hypothesis about these resources. For example, your bash shell would generally read some initialization file from /etc/bash.bashrc etc…. These resources are generally in the file system (see hier(7) for conventions about the file hierarchy) and the default file path is built in your executable. Try to use strings(1) on most executables of your system. You’ll find out which file paths are known to it. 
BTW, for many GNU programs using autoconf, you could run make install DESTDIR=/tmp/destdir/ without being root. Then /tmp/destdir/ is filled with the files that should be later packaged. FWIW, I tend to believe that my bismon (GPLv3+ licensed) program (described in my bismon-chariot-doc.pdf report) cannot be “installed”; I am not sure to be able to prove that, and I cannot imagine how could I make that program installable. A makefile might have no install target, and more importantly, you can have programs which are not even supposed to be installable (e.g. because they should run from their build directory, or because they can run installed anywhere). The install target is just a convention for usual makefile-s. However, many programs require external resources to be run (for example: fonts, databases, configuration files, etc). And their executable often make some hypothesis about these resources. For example, your bash shell would generally read some initialization file from /etc/bash.bashrc etc…. These resources are generally in the file system (see hier(7) for conventions about the file hierarchy) and the default file path is built in your executable. Try to use strings(1) on most executables of your system. You’ll find out which file paths are known to it. BTW, for many GNU programs using autoconf, you could run make install DESTDIR=/tmp/destdir/ without being root. Then /tmp/destdir/ is filled with the files that should be later packaged. FWIW, I tend to believe that my bismon (GPLv3+ licensed) program (described in my bismon-chariot-doc.pdf report) cannot be “installed”; I am not sure to be able to prove that, and I cannot imagine how could I make that program installable. edited Nov 23 at 16:35 answered Nov 23 at 14:27 Basile Starynkevitch 27.1k56098 27.1k56098 • DESTDIR or other prefixes are too often forgotten. 
As soon as external resources such as dynamic libraries are involved, it is not possible to build the software without knowing where it will be installed. Also great for installing to non-standard locations, e.g /opt or into $HOME. The only way to avoid different prefixes is to use containers, but that’s of course a Linux-specific solution.
– amon
Nov 23 at 14:50

• I’ve seen more than one package that if you tried DESTDIR=/tmp/destdir would not work later when installed to the normal place because DESTDIR was used in path generation.
– Joshua
Nov 23 at 15:34

• @amon: I’m not sure I would characterize containers as Linux-specific. Linux may be a common target platform for containerization, but some form of container technology exists in most modern operating systems.
– Kevin
Nov 23 at 16:25

• @Joshua It shouldn’t, DESTDIR should only be relevant during the install step. You should be able to do: ./configure --prefix="/opt/foo" && make && DESTDIR=/tmp/foo make install and be able to relocate the package to /opt/foo without any issue.
– Nax
Nov 24 at 14:27

• DESTDIR or other prefixes are too often forgotten. As soon as external resources such as dynamic libraries are involved, it is not possible to build the software without knowing where it will be installed. Also great for installing to non-standard locations, e.g /opt or into $HOME. The only way to avoid different prefixes is to use containers, but that’s of course a Linux-specific solution. – amon Nov 23 at 14:50 • I’ve seen more than one package that if you tried DESTDIR=/tmp/destdir would not work later when installed to the normal place because DESTDIR was used in path generation. – Joshua Nov 23 at 15:34 • @amon: I’m not sure I would characterize containers as Linux-specific. Linux may be a common target platform for containerization, but some form of container technology exists in most modern operating systems. – Kevin Nov 23 at 16:25 • @Joshua It shouldn’t, DESTDIR should only be relevant during the install step. You should be able to do: ./configure --prefix="/opt/foo" && make && DESTDIR=/tmp/foo make install and be able to relocate the package to /opt/foo without any issue. – Nax Nov 24 at 14:27 2 DESTDIR or other prefixes are too often forgotten. As soon as external resources such as dynamic libraries are involved, it is not possible to build the software without knowing where it will be installed. Also great for installing to non-standard locations, e.g /opt or into $HOME. The only way to avoid different prefixes is to use containers, but that’s of course a Linux-specific solution.
– amon
Nov 23 at 14:50

DESTDIR or other prefixes are too often forgotten. As soon as external resources such as dynamic libraries are involved, it is not possible to build the software without knowing where it will be installed. Also great for installing to non-standard locations, e.g /opt or into $HOME. The only way to avoid different prefixes is to use containers, but that’s of course a Linux-specific solution. – amon Nov 23 at 14:50 2 I’ve seen more than one package that if you tried DESTDIR=/tmp/destdir would not work later when installed to the normal place because DESTDIR was used in path generation. – Joshua Nov 23 at 15:34 I’ve seen more than one package that if you tried DESTDIR=/tmp/destdir would not work later when installed to the normal place because DESTDIR was used in path generation. – Joshua Nov 23 at 15:34 @amon: I’m not sure I would characterize containers as Linux-specific. Linux may be a common target platform for containerization, but some form of container technology exists in most modern operating systems. – Kevin Nov 23 at 16:25 @amon: I’m not sure I would characterize containers as Linux-specific. Linux may be a common target platform for containerization, but some form of container technology exists in most modern operating systems. – Kevin Nov 23 at 16:25 1 @Joshua It shouldn’t, DESTDIR should only be relevant during the install step. You should be able to do: ./configure --prefix="/opt/foo" && make && DESTDIR=/tmp/foo make install and be able to relocate the package to /opt/foo without any issue. – Nax Nov 24 at 14:27 @Joshua It shouldn’t, DESTDIR should only be relevant during the install step. You should be able to do: ./configure --prefix="/opt/foo" && make && DESTDIR=/tmp/foo make install and be able to relocate the package to /opt/foo without any issue. – Nax Nov 24 at 14:27 There are several reasons which come to mind. 
• Many package creating software – the Debian build system for example, and IIRC rpm as well – already expect from the building script to “install” the program to some special subdirectory. So it is driven by backward compatibility in both directions. • A user may want to install the software to a local space, like in the $HOME directory. Not all package managers support it.
• There may still be environments which do not have packages.

• I reworded the question a bit, I meant program manager when I said that makefiles should create installable packages.
– Synxis
Nov 23 at 14:15

There are several reasons which come to mind.

• Many package creating software – the Debian build system for example, and IIRC rpm as well – already expect from the building script to “install” the program to some special subdirectory. So it is driven by backward compatibility in both directions.
• A user may want to install the software to a local space, like in the $HOME directory. Not all package managers support it. • There may still be environments which do not have packages. • I reworded the question a bit, I meant program manager when I said that makefiles should create installable packages. – Synxis Nov 23 at 14:15 There are several reasons which come to mind. • Many package creating software – the Debian build system for example, and IIRC rpm as well – already expect from the building script to “install” the program to some special subdirectory. So it is driven by backward compatibility in both directions. • A user may want to install the software to a local space, like in the $HOME directory. Not all package managers support it.
• There may still be environments which do not have packages.

There are several reasons which come to mind.

• Many package creating software – the Debian build system for example, and IIRC rpm as well – already expect from the building script to “install” the program to some special subdirectory. So it is driven by backward compatibility in both directions.
• A user may want to install the software to a local space, like in the $HOME directory. Not all package managers support it. • There may still be environments which do not have packages. edited Nov 23 at 17:41 Peter Mortensen 1,11621114 1,11621114 answered Nov 23 at 14:07 max630 1,120411 1,120411 • I reworded the question a bit, I meant program manager when I said that makefiles should create installable packages. – Synxis Nov 23 at 14:15 • I reworded the question a bit, I meant program manager when I said that makefiles should create installable packages. – Synxis Nov 23 at 14:15 I reworded the question a bit, I meant program manager when I said that makefiles should create installable packages. – Synxis Nov 23 at 14:15 I reworded the question a bit, I meant program manager when I said that makefiles should create installable packages. – Synxis Nov 23 at 14:15 One reason not mentioned is there’s a lot of times when you are not using the current version of the software or using a modified version of the software. Trying to create a custom package is not only more work, but it can conflict with currently created and distributed packages. In open source code this happens a lot especially if breaking changes are introduced in future versions you are using. Let’s say you’re using the open source project FOO which is currently on version 2.0.1 and you are using version 1.3.0. You don’t want to use anything above that because version 2.0.0 is incompatible with what you are currently doing, but there is a single bug fix in 2.0.1 you desperately need. Having the make install option let’s you install the modified 1.3.0 software without having to worry about creating a package and install it on your system. One reason not mentioned is there’s a lot of times when you are not using the current version of the software or using a modified version of the software. 
Trying to create a custom package is not only more work, but it can conflict with currently created and distributed packages. In open source code this happens a lot especially if breaking changes are introduced in future versions you are using. Let’s say you’re using the open source project FOO which is currently on version 2.0.1 and you are using version 1.3.0. You don’t want to use anything above that because version 2.0.0 is incompatible with what you are currently doing, but there is a single bug fix in 2.0.1 you desperately need. Having the make install option let’s you install the modified 1.3.0 software without having to worry about creating a package and install it on your system. One reason not mentioned is there’s a lot of times when you are not using the current version of the software or using a modified version of the software. Trying to create a custom package is not only more work, but it can conflict with currently created and distributed packages. In open source code this happens a lot especially if breaking changes are introduced in future versions you are using. Let’s say you’re using the open source project FOO which is currently on version 2.0.1 and you are using version 1.3.0. You don’t want to use anything above that because version 2.0.0 is incompatible with what you are currently doing, but there is a single bug fix in 2.0.1 you desperately need. Having the make install option let’s you install the modified 1.3.0 software without having to worry about creating a package and install it on your system. One reason not mentioned is there’s a lot of times when you are not using the current version of the software or using a modified version of the software. Trying to create a custom package is not only more work, but it can conflict with currently created and distributed packages. In open source code this happens a lot especially if breaking changes are introduced in future versions you are using. 
Let’s say you’re using the open source project FOO which is currently on version 2.0.1 and you are using version 1.3.0. You don’t want to use anything above that because version 2.0.0 is incompatible with what you are currently doing, but there is a single bug fix in 2.0.1 you desperately need. Having the make install option let’s you install the modified 1.3.0 software without having to worry about creating a package and install it on your system. answered Nov 23 at 16:44 Dom 1696 1696 Linux distributions generally separate program maintenance from package maintenance. A build system that integrates package generation would force program maintainers to also perform package maintenance. This is usually a bad idea. Distributions have lots of infrastructure to verify internal consistency, provide binaries for multiple target platforms, perform small alterations to better integrate with the rest of the system and provide a consistent experience for users reporting bugs. To generate packages directly from a build system, you would have to either integrate or bypass all of this infrastructure. Integrating it would be a lot of work for questionable benefit, and bypassing it would give a worse user experience. This is one of the “top of the food chain” problems that are typical in multi-party systems. If you have multiple complex systems, there needs to be a clear hierarchy of which system is responsible for coordinating all others. In the case of software installation management, the package manager is this component, and it will run the package’s build system, then take the output through a convenient interface (“files in a directory after an installation step”), generate a package and prepare it for upload to a repository. The package manager stands in the middle between the build system and the repository here, and is in the best position to integrate well with both. 
You may have noticed that only a few of the JavaScript packages available through npm are also available through apt — this is mainly because the JavaScript people decided that npm and its associated repository were going to be the top of their food chain, which made it close to impossible to ship these packages as Debian packages.

With my Debian Developer hat on: if you release open source software, please leave the packaging to distribution maintainers. It saves both you and us a lot of work.

answered Nov 23 at 17:52 Simon Richter
• You’ve said nothing about why there’s an install target, and it seems to me that most of what you’ve written would apply to it too… – curiousdannii Nov 24 at 11:45
• @curiousdannii, there needs to be some interface between the build system and the package manager, and this happens to be the simplest one, so it won. – Simon Richter Nov 25 at 13:06

Well, application developers are the ones who know where each file should go. They could leave that in documentation and have package maintainers read it and build an install script for each package. But maybe the package maintainers would misinterpret the documentation and have to debug the script until it works; that is inefficient. It’s better for the application developer to write a script that properly installs the application he has written.

He could give that install script an arbitrary name, or make it part of the procedure of some other script. However, because there is a standard install command, make install (a convention that predates package managers), it has become really easy to make packages. If you look at the PKGBUILD template for making Arch Linux packages, you can see that the function that actually packages simply does a make DESTDIR="$pkgdir/" install. This probably works as-is for the majority of packages, and for more with a little modification. Thanks to make (and the autotools) being standard, packaging is really, really easy.
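The DESTDIR convention that the PKGBUILD relies on can be sketched concretely. The one-file project and its Makefile below are hypothetical stand-ins, and `$pkgdir` plays the role of the staging directory that a package manager would archive into a package.

```shell
# DESTDIR staging, as a package manager would drive it. PREFIX is the final
# location recorded in the package; DESTDIR merely prepends a staging root
# at install time. ($src, $pkgdir, and "foo" are illustrative names.)
set -eu
src=$(mktemp -d)
pkgdir=$(mktemp -d)   # the staging directory an Arch package() would tar up

printf '#!/bin/sh\necho packaged\n' > "$src/foo"

# Conventional Makefile install target (recipe line starts with a tab).
printf 'PREFIX ?= /usr\ninstall:\n\tinstall -D -m 755 foo $(DESTDIR)$(PREFIX)/bin/foo\n' \
    > "$src/Makefile"

make -C "$src" DESTDIR="$pkgdir/" install  # what package() effectively runs

find "$pkgdir" -type f   # staged tree: usr/bin/foo under $pkgdir
```

Nothing is written outside `$pkgdir`, which is the whole point: the build system's install step produces "files in a directory", and the package manager turns that directory into a package.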
answered Nov 24 at 2:22 JoL
## Isfahan

Isfahan (Persian: اصفهان‎, translit. Esfahān [esfæˈhɒːn]; historically also rendered in English as Ispahan, Sepahan, Esfahan or Hispahan; ancient names: Spahān, Aspadana; nicknamed Nesf-e Jahān, “Half of the World”) is a city in Iran, located 406 kilometres (252 miles) south of Tehran, and is the capital of Isfahan Province. It lies at 32°38′41″N 51°40′03″E at an elevation of 1,574 m, with an urban area of 551 km2 (213 sq mi);[1] the 2016 census recorded an urban population of 1,961,260[2] and a metropolitan population of 3,989,070.[3] Isfahan has a population of approximately 1.6 million,[4] making it the third largest city in Iran after Tehran and Mashhad.
Isfahan is an important city as it is located at the intersection of the two principal north–south and east–west routes that traverse Iran. It was once one of the largest cities in the world. It flourished from 1050 to 1722, particularly in the 16th and 17th centuries under the Safavid dynasty, when it became the capital of Persia for the second time in its history. Even today the city retains much of its past glory. It is famous for its Persian–Islamic architecture, with many beautiful boulevards, covered bridges, palaces, mosques, and minarets, and the city also has many historical buildings, monuments, paintings and artefacts. The fame of Isfahan led to the Persian pun and proverb “Esfahān nesf-e-jahān ast”: Isfahan is half (of) the world.[5] The Naqsh-e Jahan Square in Isfahan is one of the largest city squares in the world. UNESCO has designated it a World Heritage Site.

## History

### Etymology

See also: Names of Isfahan [fa]

“Isfahan” is derived from Middle Persian Spahān. Spahān is attested in various Middle Persian seals and inscriptions, including that of the Zoroastrian Magus Kartir,[6] and is also the Armenian name of the city (Սպահան).
The present-day name is the Arabicized form of Ispahan (unlike Middle Persian, and similar to Spanish, New Persian does not allow initial consonant clusters such as sp).[7] The region appears with the abbreviation GD (Southern Media) on Sasanian numismatics. In Ptolemy’s Geographia it appears as Aspadana, translating to “place of gathering for the army”. It is believed that Spahān derives from spādānām, “the armies”, the Old Persian plural of spāda (from which derive spāh, ‘army’, and spahi, ‘soldier’, literally ‘of the army’, in Middle Persian).

### Prehistory

Human habitation of the Isfahan region can be traced back to the Palaeolithic period. In recent excavations, archaeologists have found artifacts dating back to the Palaeolithic, Mesolithic, Neolithic, Bronze and Iron ages.

### Zoroastrian era

What was to become the city of Isfahan in later historical periods probably emerged as a locality and settlement that gradually developed over the course of the Elamite civilisation (2700–1600 BCE). Under Median rule, this commercial entrepôt began to show signs of a more sedentary urbanism, steadily growing into a noteworthy regional centre that benefited from the exceptionally fertile soil on the banks of the Zayanderud River, in a region called Aspandana or Ispandana.

Once Cyrus the Great (r. 559–529 BCE) had unified Persian and Median lands into the Achaemenid Empire (648–330 BCE), the religiously and ethnically diverse city of Isfahan became an early example of the king’s fabled religious tolerance. It was Cyrus who, having just taken Babylon, made an edict in 538 BCE declaring that the Jews in Babylon could return to Jerusalem (see Ezra ch. 1). It seems that some of these freed Jews settled in Isfahan instead of returning to their homeland.
The 10th-century Persian historian Ibn al-Faqih wrote:

“When the Jews emigrated from Jerusalem, fleeing from Nebuchadnezzar, they carried with them a sample of the water and soil of Jerusalem. They did not settle down anywhere or in any city without examining the water and the soil of each place. They did so all along until they reached the city of Isfahan. There they rested, examined the water and soil, and found that both resembled Jerusalem. Thereupon they settled there, cultivated the soil, raised children and grandchildren, and today the name of this settlement is Yahudia.”[8]

The Parthians (250 BCE – 226 CE) continued the tradition of tolerance after the fall of the Achaemenids, fostering the Hellenistic dimension within Iranian culture and the political organisation introduced by Alexander the Great’s invading armies. Under the Parthians, Arsacid governors administered the provinces of the nation from Isfahan, and the city’s urban development accelerated to accommodate the needs of a capital city. (At the end of the 6th century, Isfahan consisted of two separate areas, Sassanid Jay and Jewish Yahudia; by the 11th century the two had completely merged.)

The next empire to rule Persia, the Sassanids (226–652 CE), presided over massive changes in their realm, instituting sweeping agricultural reform and reviving Iranian culture and the Zoroastrian religion. Both the city and the region were then called Aspahan or Spahan. The city was governed by a group called the Espoohrans, who came from seven noble and important Iranian royal families. Extant foundations of some Sassanid-era bridges in Isfahan suggest that the Sasanian kings were fond of ambitious urban-planning projects.
While Isfahan’s political importance declined during the period, many Sassanid princes would study statecraft in the city, and its military role developed rapidly. Its strategic location at the intersection of the ancient roads to Susa and Persepolis made it an ideal candidate to house a standing army, ready to march against Constantinople at any moment. The words ‘Aspahan’ and ‘Spahan’ are derived from the Pahlavi, or Middle Persian, term meaning ‘the place of the army’.[9]

Although many theories have been mentioned about the origin of Isfahan, little is in fact known of it before the rule of the Sasanian dynasty (c. 224 – c. 651 CE). The historical facts suggest that in the late 4th and early 5th centuries, Queen Shushandukht, the Jewish consort of Yazdegerd I (reigned 399–420), settled a colony of Jews in Yahudiyyeh (also spelled Yahudiya), a settlement 3 km northwest of the Zoroastrian city of Gabae (its Achaemenid and Parthian name; Gabai was its Sasanian name, later shortened to Gay, Arabic ‘Jay’), which was located on the northern bank of the Zayanderud River. The gradual population decrease of Gay (Jay) and the simultaneous population increase of Yahudiyyeh and its suburbs after the Islamic conquest of Iran resulted in the formation of the nucleus of what was to become the city of Isfahan. The words “Aspadana”, “Ispadana”, “Spahan” and “Sepahan”, from all of which the word Isfahan is derived, referred to the region in which the city was located. Isfahan and Gay were both circular in design, a characteristic of Parthian and Sasanian cities.[10]

### Islamic era

When the Arabs captured Isfahan in 642, they made it the capital of al-Jibal (“the Mountains”) province, an area that covered much of ancient Media.
Isfahan grew prosperous under the Persian Buyid (Buwayhid) dynasty, which rose to power and ruled much of Iran when the temporal authority of the Abbasid caliphs waned in the 10th century. The Turkish conqueror and founder of the Seljuq dynasty, Toghril Beg, made Isfahan the capital of his domains in the mid-11th century, but it was under his grandson Malik-Shah I (r. 1073–92) that the city grew in size and splendour.[11]

After the fall of the Seljuqs (c. 1200), Isfahan temporarily declined and was eclipsed by other Iranian cities such as Tabriz and Qazvin. During his visit in 1327, Ibn Battuta noted that “The city of Isfahan is one of the largest and fairest of cities, but it is now in ruins for the greater part.”[12]

It regained its importance during the Safavid period (1501–1736). The city’s golden age began in 1598, when the Safavid ruler Shah Abbas I (reigned 1588–1629) moved his capital from Qazvin to the more central Isfahan, calling it Ispahān (New Persian), so that it would be less exposed to Ottoman threat; he rebuilt it into one of the largest and most beautiful cities of the 17th-century world. This new status ushered in a golden age for the city, with architecture and Persian culture flourishing.

In the 16th and 17th centuries, thousands of deportees and migrants from the Caucasus, whom Abbas and other Safavid rulers had permitted to emigrate en masse, settled in the city, so that it came to have enclaves of Georgian, Circassian, and Daghistani descent.[13] Engelbert Kaempfer, who dwelt in Safavid Persia in 1684–85, estimated their number at 20,000.[13][14] During the Safavid era, the city also contained a very large Armenian community.
As part of Abbas’s forced resettlement of peoples from within his empire, he resettled as many as 300,000 Armenians[15][16] from near the unstable Safavid–Ottoman border, primarily from the very wealthy Armenian town of Jugha (also known as Old Julfa), in mainland Iran.[16] In Isfahan, he ordered the foundation of a new quarter for these resettled Armenians from Old Julfa, and thus the Armenian Quarter of Isfahan was named New Julfa.[15][16] Today, the New Julfa district of Isfahan remains a heavily Armenian-populated district, with Armenian churches and shops; the Vank Cathedral is especially notable for its combination of Armenian Christian and Iranian Islamic elements. It is still one of the oldest and largest Armenian quarters in the world.

Following an agreement between Shah Abbas I and his Georgian subject Teimuraz I of Kakheti (“Tahmuras Khan”), whereby the latter submitted to Safavid rule in exchange for being allowed to rule as the region’s wāli (governor) and for having his son serve as dāruḡa (“prefect”) of Isfahan in perpetuity, the Georgian prince converted to Islam and served as governor.[13] He was accompanied by a troop of soldiers, some of whom were Georgian Orthodox Christians.[13] The royal court in Isfahan had a great number of Georgian ḡolāms (military slaves), as well as Georgian women.[13] Although they spoke both Persian and Turkic, their mother tongue was Georgian.[13]

During Abbas’s reign, Isfahan became very famous in Europe, and many European travellers, such as Jean Chardin, wrote accounts of their visits to the city. This prosperity lasted until the city was sacked by Afghan invaders in 1722, during a marked decline in Safavid influence. Thereafter, Isfahan declined in importance, culminating in a move of the capital to Mashhad and Shiraz during the Afsharid and Zand periods respectively, until it was finally moved to Tehran in 1775 by Agha Mohammad Khan, the founder of the Qajar dynasty.
(See https://www.britannica.com/place/Tehran)

In the early years of the 19th century, efforts were made to preserve some of Isfahan’s archaeologically important buildings. The work was started by Mohammad Hossein Khan during the reign of Fath Ali Shah.[17]

### Modern age

In the 20th century, Isfahan was resettled by a very large number of people from southern Iran, first during the population migrations at the start of the century, and again in the 1980s following the Iran–Iraq War. Today, Isfahan produces fine carpets, textiles, steel, handicrafts, and traditional foods including sweets. There are nuclear experimental reactors as well as facilities for producing nuclear fuel (UCF) within the environs of the city. Isfahan has one of the largest steel-producing facilities in the region, as well as facilities for producing special alloys. Mobarakeh Steel Company is the biggest steel producer in the Middle East and North Africa, and the biggest DRI producer in the world.[18] The Isfahan Steel Company was the first manufacturer of constructional steel products in Iran, and it remains the largest such company today.[19]

The city has an international airport and a metro line. There are a major oil refinery and a large air force base outside the city. HESA, Iran’s most advanced aircraft-manufacturing plant, is located just outside the city.[20][21] Isfahan is also attracting international investment,[22] especially in the Isfahan City Center,[23] which is the largest shopping mall in Iran and the fifth largest in the world.[24] Isfahan hosted the International Physics Olympiad in 2007.

## Geography and climate

The city is located in the lush plain of the Zayanderud River, at the foothills of the Zagros mountain range. The nearest mountain is Mount Soffeh (Kuh-e Soffeh), just south of the city. No geological obstacles exist within 90 kilometres (56 miles) north of Isfahan, allowing cool winds to blow from this direction.
Situated at 1,590 metres (5,217 ft) above sea level on the eastern side of the Zagros Mountains, Isfahan has an arid climate (Köppen BWk). Despite its altitude, Isfahan remains hot during the summer, with maxima typically around 35 °C (95 °F). However, with low humidity and moderate temperatures at night, the climate is quite pleasant. During the winter, days are mild while nights can be very cold. Snow has occurred at least once every winter except 1986/1987 and 1989/1990.[25] The Zayande River starts in the Zagros Mountains, flows from the west through the heart of the city, then dissipates in the Gavkhooni wetland.

Climate data for Isfahan (1961–1990, extremes 1951–2010):

| Month | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec | Year |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Record high °C | 20.4 | 23.4 | 29.0 | 32.0 | 37.6 | 41.0 | 43.0 | 42.0 | 39.0 | 33.2 | 26.8 | 21.2 | 43.0 |
| Average high °C | 8.8 | 11.9 | 16.8 | 22.0 | 28.0 | 34.1 | 36.4 | 35.1 | 31.2 | 24.4 | 16.9 | 10.8 | 23.0 |
| Daily mean °C | 2.7 | 5.5 | 10.4 | 15.7 | 21.3 | 27.1 | 29.4 | 27.9 | 23.5 | 16.9 | 9.9 | 4.4 | 16.2 |
| Average low °C | −2.4 | −0.2 | 4.5 | 9.4 | 14.2 | 19.1 | 21.5 | 19.8 | 15.1 | 9.3 | 3.6 | −0.9 | 9.4 |
| Record low °C | −19.4 | −12.2 | −8 | −4 | 4.5 | 10.0 | 13.0 | 11.0 | 5.0 | 0.0 | −8 | −13 | −19.4 |
| Precipitation (mm) | 17.1 | 14.1 | 18.2 | 19.2 | 8.8 | 0.6 | 0.7 | 0.2 | 0.0 | 4.1 | 9.9 | 19.6 | 112.5 |
| Precipitation days (≥ 1.0 mm) | 4.0 | 2.9 | 3.8 | 3.5 | 2.0 | 0.2 | 0.3 | 0.1 | 0.0 | 0.8 | 2.2 | 3.7 | 23.5 |
| Snowy days | 3.2 | 1.7 | 0.7 | 0.1 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2 | 1.9 | 7.8 |
| Relative humidity (%) | 60 | 51 | 43 | 39 | 33 | 23 | 23 | 24 | 26 | 36 | 48 | 57 | 39 |
| Sunshine hours | 205.3 | 213.3 | 242.1 | 244.5 | 301.3 | 345.4 | 347.6 | 331.2 | 311.6 | 276.5 | 226.1 | 207.6 | 3,252.5 |

Sources: NOAA;[26] Iran Meteorological Organization (records)[27][28]

### Air pollution

Air pollution is one of the major environmental issues in Isfahan. Due to an increase in the number of cars in the city, and to the thermal power plants, petrochemical complexes and oil refinery in the west of the city, air pollution levels increased markedly in the second half of the 20th century. With the introduction of national environmental standards for heavy industry, industrial pollution has been reduced in recent years. However, the air quality in the city is far below world norms; indeed, Isfahan has the highest air-pollution index of all the major cities in Iran. This is thought to be partly due to its climate and geography.[29]

### Main places

The city centre consists of an older section revolving around the Jameh Mosque, and the Safavid expansion around Naqsh-e Jahan Square, with nearby places of worship, palaces, and bazaars.[30]

### Bazaars

• Shahi Bazaar – 17th century
• Qeysarie Bazaar – 17th century

### Bridges

The bridges on the Zayanderud river comprise some of the finest architecture in Isfahan. The oldest is the Shahrestan bridge, whose foundations were built by the Sasanian Empire (3rd–7th century); it was repaired during the Seljuk period. Further upstream is the Khaju bridge, which was built by Shah Abbas II in 1650. It is 123 metres (404 feet) long with 24 arches, and also serves as a sluice gate.
Another bridge is the Choobi (Joui) bridge, which was originally an aqueduct to supply the palace gardens on the north bank of the river. Further upstream again is the Si-o-Seh Pol or bridge of 33 arches. Built during the reign of Shah Abbas the Great, it linked Isfahan with the Armenian suburb of New Julfa. It is by far the longest bridge in Isfahan at 295 m (967.85 ft). Another notable bridge is the Marnan Bridge. ### Churches and cathedrals • Bedkhem Church – 1627 • St. Georg Church – 17th century • St. Jakob Church – 1607 • St. Mary Church – 17th century • Vank Cathedral – 1664 ### Emamzadehs • Emamzadeh Ahmad • Emamzadeh Esmaeil, Isfahan • Emamzadeh Haroun-e-Velayat – 16th century • Emamzadeh Jafar • Emamzadeh Shah Zeyd ### Gardens and parks • Birds Garden • Flower Garden • Nazhvan Recreational Complex ### Houses • Alam’s House • Amin’s House • Malek Vineyard • Qazvinis’ House – 19th century • Sheykh ol-Eslam’s House ### Mausoleums and tombs • Al-Rashid Mausoleum – 12th century • Baba Ghassem Mausoleum – 14th century • Mausoleum of Safavid Princes • Nizam al-Mulk Tomb – 11th century • Saeb Mausoleum • Shahshahan mausoleum – 15th century • Soltan Bakht Agha Mausoleum – 14th century ### Minarets • Ali minaret – 11th century • Bagh-e-Ghoushkhane minaret – 14th century • Chehel Dokhtaran minaret – 12th century • Dardasht minarets – 14th century • Darozziafe minarets – 14th century • Menar Jonban – 14th century • Sarban minaret ### Mosques • Agha Nour mosque – 16th century • Hakim Mosque • Ilchi mosque • Jameh Mosque[31] • Jarchi mosque – 1610 • Lonban mosque • Maghsoudbeyk mosque – 1601 • Mohammad Jafar Abadei mosque – 1878 • Rahim Khan mosque – 19th century • Roknolmolk mosque • Seyyed mosque – 19th century • Shah Mosque – 1629 • Sheikh Lotf Allah Mosque – 1618 ### Museums • Contemporary Arts Museum Isfahan • Isfahan City Center Museum • Museum of Decorative Arts • Natural History Museum of Isfahan – 15th century ### Schools (madresse) • Chahar Bagh School – early
17th century • Harati • Kassegaran school – 1694 • Madreseye Khajoo • Nimavar school – 1691 • Sadr school – 19th century ### Palaces and caravanserais • Ali Qapu (The Royal Palace) – early 17th century • Chehel Sotoun (The Palace of Forty Columns) – 1647 • Hasht-Behesht (The Palace of Eight Paradises) – 1669 • Shah Caravanserai • Talar Ashraf (The Palace of Ashraf) – 1650 ### Squares and streets A view of Meydan Kohne • Chaharbagh Boulevard – 1596 • Chaharbagh-e-khajou Boulevard • Meydan Kohne (Old Square) • Naqsh-e Jahan Square also known as “Shah Square” or “Imam Square” – 1602 ### Synagogues • Kenisa-ye Bozorg (Mirakhor’s kenisa) • Kenisa-ye Molla Rabbi • Kenisa-ye Sang-bast • Mullah Jacob Synagogue • Mullah Neissan Synagogue • Kenisa-ye Keter David ### Tourist attractions The central historical area in Isfahan is called Seeosepol (the name of a famous bridge).[32][33] ### Other sites • Atashgah – a Zoroastrian fire temple • The Bathhouse of Bahāʾ al-dīn al-ʿĀmilī • Isfahan City Center • Jarchi hammam • New Julfa (The Armenian Quarter) – 1606 • Pigeon Towers[34] – 17th century • Takht-e Foulad ## Education Central Municipal Library of Esfahan Front Facade of the Central Municipal Library of Esfahan Aside from the seminaries and religious schools, the major universities of the Esfahan metropolitan area are: • Universities • Isfahan University of Art • Isfahan University of Medical Sciences • Isfahan University of Technology • Islamic Azad University of Isfahan • Islamic Azad University of Najafabad • Islamic Azad University of Falavarjan • Islamic Azad University of Majlesi • University of Isfahan • High schools • Adab High School • Farzanegan e Amin High School • Harati High School • Imam Mohammad Bagher Education Complex • Imam Sadegh Education Complex • Mahboobeh Danesh (Navaie) • Pooya High School • Saadi High School • Sa’eb Education Complex • Salamat High School • Saremiyh High School • Shahid Ejei High School • Saeb High School There are also more than 50 
technical and vocational training centres in the province under the administration of Esfahan TVTO, which provide free, non-formal training programs.[35] ## Transportation [Image: old building of Isfahan city hall] ### Roads Over the past decade, Isfahan’s internal highway network has been undergoing major expansion. Much care has been taken to prevent damage to valuable historical buildings. Modern freeways connect the city to the country’s major cities, including the capital Tehran (approximately 400 km to the north) and Shiraz (200 km to the south). Highways also serve the satellite cities surrounding the metropolitan area.[36] ### Metro An 11 km north–south metro line opened on October 15, 2015. Two more lines are under construction, along with three suburban rail lines.[37] ## Culture [Images: an old master of hand-printed carpets in the Isfahan bazaar; the Damask rose ‘Ispahan’, reputedly developed in Ispahan] ## Notable people [Portraits: Mohammad-Ali Jamalzadeh; Houshang Golshiri; Mohammad Beheshti; Mohammad-Baqer Majlesi; Mohammad Ali Foroughi; Mohammad Javad Zarif; Mahmoud Farshchian; Mohammad Esfahani] Music • Jalal Taj Eesfahani (1903-1981), musician, singer and vocalist[38] • Mohammad Esfahani (1966– ), singer and songwriter[39] • Alireza Eftekhari (1956– ), singer[40] • Fard • Leila Forouhar, pop singer[41] • Hassan Kassai (1928-2012), musician[42] • Nasrollah Moein (1951– ), pop singer[43] • Hesameddin Seraj, musician, singer and vocalist[44] • Hassan Shamaizadeh, songwriter and singer[45] • Jalil Shahnaz (1921-2013), soloist of the tar, a traditional Persian instrument[46] Film • Rasul Sadr Ameli (1953–), director • Reza Arhamsadr (1924–2008), actor • Sara Bahrami (1983-), actor[47] • Homayoun Ershadi (1947–), Hollywood actor and architect • Soraya Esfandiary-Bakhtiari (1956–2001), former princess of Iran and actress • Asghar Farhadi (1972– ), Oscar-winning director[48] • Bahman Farmanara (1942–), director • Jahangir Forouhar (1916–1997), actor and father of Leila
Forouhar (Iranian singer) • Mohamad Ali Keshvarz (1930-), actor[49] • Nosratollah Vahdat (1925-), actor • Mahdi Pakdel (1980-), actor[50] • Kiumars Poorahmad (1949–), director[51] • Soroush Sehhat (1965–), actor and director[52] Craftsmen and painters • Reza Badrossama (1949–), painter and miniaturist[53] • Mahmoud Dehnavi (1927–), craftsman and artist[54] • Mahmoud Farshchian (1930–), painter and miniaturist[55] • Freydoon Rassouli (1943–), American painter born and raised in Isfahan[56] • Bogdan Saltanov (1630s–1703), Russian icon painter of Isfahanian Armenian origin Political figures • Ahmad Amir-Ahmadi (1906–1965), military leader and cabinet minister • Ayatollah Mohammad Beheshti (1928–1981), cleric, Chairman of the Council of Revolution of Iran[57] • Nusrat Bhutto, Chairman of Pakistan Peoples Party from 1979–1983; wife of Zulfikar Ali Bhutto; mother of Benazir Bhutto • Hossein Fatemi, PhD (1919–1954), politician; foreign minister in Mohamed Mossadegh’s cabinet • Mohammad-Ali Foroughi, a politician and Prime Minister of Iran in the World War II era • Dariush Forouhar (August 1928 – November 1998), a founder and leader of the Hezb-e Mellat-e Iran (Nation of Iran Party) • Hossein Kharrazi, chief of the army in the Iran–Iraq war[58] • Mohsen Nourbakhsh (1948–2003), economist, Governor of the Central Bank of Iran • Mohammad Javad Zarif (1960–), Minister of Foreign Affairs and former Ambassador of Iran to the United Nations[59] Religious figures • Lady Amin (Banou Amin) (1886–1983), Iran’s most outstanding female jurisprudent, theologian and great Muslim mystic (‘arif), a Lady Mujtahideh • Amina Begum Bint al-Majlisi was a female Safavid mujtahideh • Ayatollah Mohammad Beheshti (1928–1981), cleric, Chairman of the Council of Revolution of Iran[57] • Abū Shujāʿ al-Iṣfahānī (5th c.) jurist and judge • Allamah al-Majlisi (1616–1698), Safavid cleric, Sheikh ul-Islam in Isfahan • Salman the Persian • Muhammad Ibn Manda (d. 
1005 / AH 395), Sunni Hanbali scholar of hadith and historian • Abu Nu’aym Al-Ahbahani Al-Shafi’i (d. 1038 / AH 430), Sunni Shafi’i Scholar Sportspeople • Abdolali Changiz, football star of Esteghlal FC in the 1970s • Mansour Ebrahimzadeh, former player for Sepahan FC, former head coach of Zobahan • Ghasem Haddadifar, captain of Zobahan FC • Ehsan Hajsafi, player for the Sepahan and Olympiacos FC • Arsalan Kazemi, forward for the Oregon Ducks men’s basketball team and the Iran national basketball team • Rasoul Korbekandi, goalkeeper of the Iranian National Team • Moharram Navidkia, captain of Sepahan FC • Mohsen Sadeghzadeh, former captain of Iran national basketball team and Zobahan • Mohammad Talaei, world champion wrestler • Mahmoud Yavari (1939-), football player, coach of Iranian National Team • Sohrab Moradi (1988-), Olympic weightlifting gold medalist, world record holder of 105 kg category Writers and poets • Mohammad-Ali Jamālzādeh Esfahani (1892–1997), author • Hatef Esfehani, Persian Moral poet in the Afsharid Era • Zhaleh Esfahani (1921–2007), poet and writer[60] • Kamal ed-Din Esmail (late 12th century – early 13th century) • Houshang Golshiri (1938–2000), writer and editor • Hamid Mosadegh (1939–1998), poet and lawyer • Mirza Abbas Khan Sheida (1880–1949), poet and publisher • Saib Tabrizi • Afshin Yadollahi (1969–2017), poet and writer[61] Others • Abd-ol-Ghaffar Amilakhori, 17th-century noble • Adib Boroumand (1924-), poet, politician, lawyer, and leader of the National Front • George Bournoutian, professor, historian and author • Jesse of Kakheti, king of Kakheti in eastern Georgia from 1614 to 1615 • Simon II of Kartli, king of Kartli in eastern Georgia from 1619 to 1630/1631 • David II of Kakheti, king of Kakheti in eastern Georgia from 1709 to 1722 • Constantine II of Kakheti, king of Kakheti in eastern Georgia from 1722 to 1732 • Nasser David Khalili (1945–), property developer, art collector, and philanthropist • Arthur Pope (1881–1969), 
American archaeologist, buried near Khaju Bridge ## Sports Zob Ahan and Sepahan are the only Iranian clubs to reach the final of the new AFC Champions League. Isfahan has three association football clubs that play professionally. These are:

• Sepahan Isfahan FC
• Zob Ahan Isfahan FC
• Giti Pasand

Sepahan has won the most league titles among the Iranian clubs (2002–03, 2009–10, 2010–11, 2011–12 and 2014–15).[62] Giti Pasand also has a futsal team, Giti Pasand FSC, one of the best teams in Asia. They won the AFC Futsal Club Championship in 2012 and were runners-up in 2013. ## Municipal government ## Twin towns – sister cities [Images: Esfahan Street in Kuala Lumpur, and Kualalampur Avenue in Isfahan] Isfahan is twinned with:

| Country | City | State / province / region / governorate | Since |
|---|---|---|---|
| China | Xi’an | Shaanxi Province | 1989[63] |
| Malaysia | Kuala Lumpur | Kuala Lumpur | 1997[63] |
| Germany | Freiburg | Baden-Württemberg State | 2000[63] |
| Italy | Florence | Florence Province | 1998[63] |
| Romania | Iași | Iași County | 1999[63] |
| Spain | Barcelona | Barcelona Province | 2000[63] |
| Armenia | Yerevan | Yerevan | 2000[63] |
| Kuwait | Kuwait City | Al Asimah Governorate | 2000[63] |
| Cuba | Havana | La Habana Province | 2001[63] |
| Pakistan | Lahore | Punjab Province | 2004[63] |
| Russia | Saint Petersburg | Northwestern Federal District | 2004[63] |
| Senegal | Dakar | Dakar Region | 2009[63] |
| Lebanon | Baalbek | Baalbek-Hermel Governorate | 2010[63] |
| South Korea | Gyeongju | North Gyeongsang Province | 2017[64] |

## See also • List of the historical structures in the Isfahan province • Islamic City Council of Isfahan • 15861 Ispahan • New Julfa • Prix d’Ispahan ## References Notes 1. ^ http://www.daftlogic.com/downloads/kml/10102015-9mzrdauu.kml[permanent dead link] 2. ^ https://www.amar.org.ir/english 3. ^ “Major Agglomerations of the World – Population Statistics and Maps”. citypopulation.de. 2018-09-13.
Archived from the original on 2018-09-13. 4. ^ “Population of Cities in Iran (2018).” The population of the greater metropolitan area is 5.1 million (2016 Census). 5. ^ “Isfahan Is Half The World”. New York Times. Retrieved 23 July 2018. 6. ^ “Isfahan, Pre-Islamic-Period”. Encyclopædia Iranica. 15 December 2006. Retrieved 31 December 2015. 7. ^ Strazny, P. (2005). Encyclopedia of linguistics (p. 325). New York: Fitzroy Dearborn. 8.
^ Sacred Precincts: The Religious Architecture of Non-Muslim Communities Across the Islamic World, Gharipour Mohammad, BRILL, Nov 14, 2014, p. 179. 9. ^ “Archived copy”. Archived from the original on 23 October 2013. Retrieved 15 July 2013.CS1 maint: Archived copy as title (link) 10. ^ Salma, K. Jayyusi; Holod, Renata; Petruccioli, Attilio; André, Raymond (2008). The City in the Islamic World. Leiden: Brill. p. 174. ISBN 9789004162402. 11. ^ “Britannica.com”. 12. ^ Battutah, Ibn (2002). The Travels of Ibn Battutah. London: Picador. p. 68. ISBN 9780330418799. 13. ^ abcdefg electricpulp.com. “ISFAHAN vii. SAFAVID PERIOD – Encyclopaedia Iranica”. 14. ^ Matthee 2012, p. 67. 15. ^ ab Aslanian, Sebouh (2011). From the Indian Ocean to the Mediterranean: The Global Trade Networks of Armenian Merchants from New Julfa. California: University of California Press. p. 1. ISBN 978-0520947573. 16. ^ abc Bournoutian, George (2002). A Concise History of the Armenian People: (from Ancient Times to the Present) (2 ed.). Mazda Publishers. p. 208. ISBN 978-1568591414. 17. ^ Iran Almanac and Book of Facts. 8. Echo Institute. 1969. p. 71. OCLC 760026638. 18. ^ “MSC at a Glance”. Retrieved 19 July 2017. 19. ^ “Esfahan Steel Company A Pioneer in The Steel Industry of Iran”. Retrieved 19 July 2017. 20. ^ Hesaco.com (from the HESA official company website) 21. ^ Pike, John. “HESA Iran Aircraft Manufacturing Industrial Company”. 22. ^ “International conference held on investment opportunities in Iran tourism industry”. 23. ^ DEPARTMENT-it@isfahancitycenter.com, IT. “صفحه اصلی بزرگترین مرکز خرید ایران”. 24. ^ “About Isfahan City Center”. Retrieved 16 August 2017. 25. ^ “Snowy days for Esfahan”. Irimo.ir. Archived from the original on 26 April 2012. Retrieved 23 April 2012. 26. ^ “Esfahan Climate Normals 1961-1990”. National Oceanic and Atmospheric Administration. Retrieved 8 April 2015. 27. ^ “Highest record temperature in Esfahan by Month 1951–2010”. Iran Meteorological Organization. 
Retrieved 8 April 2015. 28. ^ “Lowest record temperature in Esfahan by Month 1951–2010”. Iran Meteorological Organization. Retrieved 8 April 2015. 29. ^ “چرا آلودگی هوای اصفهان از تهران بیشتر است؟”. Retrieved 29 June 2018. 30. ^ Assari, A., Mahesh, T., Emtehani, M., & Assari, E. (2011). Comparative sustainability of bazaar in Iranian traditional cities: Case studies in Isfahan and Tabriz. International Journal on Technical and Physical Problems of Engineering (IJTPE)(9), 18-24. 31. ^ “Isfahan Jame(Congregative) mosque – BackPack”. Fz-az.fotopages.com. Retrieved 2009-07-26. 32. ^ “Seifolddini-Faranak; M. S. Fard; Hosseini Ali” (PDF). thescipub.com. 33. ^ Assari, Ali; T.M. Mahesh (January 2012). “Conservation of historic urban core in traditional Islamic culture: case study of Isfahan city” (PDF). Indian Journal of Science and Technology. 5 (1): 1970–1976. Archived from the original (PDF) on 27 October 2012. Retrieved 7 January 2013. 34. ^ “Castles of the Fields”. Saudi Aramco World. Archived from the original on 7 October 2012. Retrieved 11 September 2012. 35. ^ “Isfahan Technical and Vocational Training Organisation”. Web.archive.org. 8 October 2007. Archived from the original on 8 October 2007. Retrieved 2012-04-23. 36. ^ Assari, Ali; Erfan Assari (2012). “Urban spirit and heritage conservation problems: case study Isfahan city in Iran” (PDF). Journal of American Science. 8 (1): 203–209. Retrieved 7 January 2013. 37. ^ Ltd, DVV Media International. “Esfahan metro opens”. Railway Gazette. Retrieved 2018-08-02. 38. ^ “نگاهی به زندگی و کارنامه هنری استاد جلال تاج”. Retrieved 14 July 2017. 39. ^ “بیوگرافی محمد اصفهانی و همسرش”. Retrieved 18 May 2018. 40. ^ “بیوگرافی علیرضا افتخاری”. Archived from the original on 2014-03-05. Retrieved 14 July 2017. 41. ^ “بیوگرافی لیلا فروهر / عکس · جدید 96 -گهر”. Retrieved 14 July 2017. 42. ^ “Hassan Kassai”. Retrieved 14 July 2017. 43. ^ “بیوگرافی و شرح زندگی معین”. Retrieved 13 August 2017. 44. ^ “بیوگرافی حسام الدین سراج”. 
Retrieved 14 July 2017. 45. ^ “بیوگرافی حسن شماعی زاده”. Retrieved 31 August 2017. 46. ^ “شهسوار تار”. Retrieved 15 July 2017. 47. ^ “سارا بهرامی+بیوگرافی”. Retrieved 20 August 2018. 48. ^ “بیوگرافی اصغر فرهادی – زومجی”. Archived from the original on 2016-07-31. Retrieved 15 July 2017. 49. ^ “بیوگرافی “محمد علی کشاورز” + عکس”. Retrieved 20 August 2018. 50. ^ “بیوگرافی مهدی پاکدل و همسرش”. Retrieved 20 August 2018. 51. ^ “بیوگرافی کیومرث پور احمد”. Retrieved 20 August 2018. 52. ^ “بیوگرافی کامل سروش صحت + عکس”. Retrieved 15 July 2017. 53. ^ “Reza Badrossama Biography”. Retrieved 17 July 2017. 54. ^ “استاد محمود دهنوی”. Retrieved 17 July 2017. 55. ^ “مروري كوتاه بر زندگي‌نامه استاد محمود فرشچيان”. Archived from the original on 2013-10-19. Retrieved 15 July 2017. 56. ^ “Abstract paintings and conceptual spiritual art by Freydoon Rassouli”. Retrieved 15 July 2017. 57. ^ ab “زندگی نامه شهید بهشتی”. Retrieved 31 August 2017. 58. ^ “حسین خرازی که بود و چگونه به شهادت رسید؟”. Retrieved 31 August 2017. 59. ^ “ناشنیده‌هایی از زندگی “ظریف” در روز تولدش”. Retrieved 17 August 2017. 60. ^ “اشعار زیبا و کوتاه ژاله اصفهانی”. Retrieved 27 October 2018. 61. ^ “افشین یداللهی را بهتر بشناسید”. Retrieved 20 August 2018. 62. ^ “گزارشی از تاریخ قهرمانان ایران؛ پرسپولیس بهترین تیم تاریخ، سپاهان برترین تیم لیگ/ یک آبی‌ در صدر”. Retrieved 3 September 2017. 63. ^ abcdefghijklm “خواهر خوانده های اصفهان”. Archived from the original on 2010-02-25. Retrieved 10 July 2017. 64. ^ “گوانجو کره جنوبی پانزدهمین خواهرخوانده اصفهان”. 11 March 2017. http://www.hourgasht.ir/city/turisminfo/20053 • Dehghan, Maziar (2014). Management in IRAN. ISBN 978-600-04-1573-0. ## Sources • Yves Bomati and Houchang Nahavandi,Shah Abbas, Emperor of Persia,1587-1629, 2017, ed. Ketab Corporation, Los Angeles, ISBN 978-1595845672, English translation by Azizeh Azodi. • Matthee, Rudi (2012). Persia in Crisis: Safavid Decline and the Fall of Isfahan. I.B.Tauris. ISBN 978-1845117450. 
## External links Isfahan travel guide from Wikivoyage • Isfahan official website • Isfahan Metro • 360-degree panorama gallery of Isfahan • Isfahan Geometry on a Human Scale – a documentary film directed by Manouchehr Tayyab (30 min) • Well illustrated guide to Isfahan

Preceded by Rey · Capital of Seljuq Empire (Persia), 1051–1118 · Succeeded by Hamadan (western capital) and Merv (eastern capital)
Preceded by Qazvin · Capital of Iran (Persia), 1598–1736 · Succeeded by Mashhad
Preceded by Qazvin · Capital of Safavid dynasty, 1598–1722 · Succeeded by –

## Is partially claying okay?

Can you partially ‘clay’ a car, only claying the problem areas? Or will you see differences in the end result (after waxing) between areas that were clayed and areas that were not? I’m just starting in car detailing; I’ve never used a clay bar before.

detailing · asked Nov 23 at 13:20 by svenema · edited Nov 23 at 13:31 by Pᴀᴜʟsᴛᴇʀ2

## 1 Answer

If you only do part of the vehicle, how do you know you’ve gotten the part which is actually contaminated? If you clay a car, you want to do the whole thing. Claying removes the surface contaminants which can further harm the finish on your vehicle when you are washing it. The contaminants act to dull the appearance. If you only treat part of the surface, the rest of the vehicle will still have contaminants, which will leave that part of the car looking dull, and you also risk pulling some of those contaminants off and damaging the car’s finish. If you’re going to use a clay bar, don’t go 1/2 way … take care of business and do the entire car.

answered Nov 23 at 13:31, edited Nov 23 at 14:18 · Pᴀᴜʟsᴛᴇʀ2

• While I think you mostly nailed this, certainly there’s no reason to only clay part of the car in 99.99% of claying scenarios – really, claying should come after washing. – motosubatsu Nov 23 at 14:14
• @motosubatsu – You are right! I’ve updated my response. – Pᴀᴜʟsᴛᴇʀ2 Nov 23 at 14:18

## Kilij Arslan I

Kilij Arslan I · Seljuq Sultan of Rûm · Reign: 1092–1107 · Predecessor: Suleyman I · Successor: Melikshah · Born: 1079 · Died: 1107 (aged 27–28), Khabur River, near Mosul · House: House of Seljuq · Father: Suleyman I of Rûm

Kilij Arslan (Old Anatolian Turkish: قِلِج اَرسلان; Persian: قلج ارسلان, Qilij Arslān; Modern Turkish: Kılıç Arslan, meaning “Sword Lion”) (1079–1107) was the Seljuq Sultan of Rûm from 1092 until his death in 1107. He ruled the Sultanate during the time of the First Crusade and thus faced its attack.[1] He also re-established the Sultanate of Rum after the death of Malik Shah I of Great Seljuq and defeated the Crusaders in three battles during the Crusade of 1101. ## Rise to power After the death of his father, Suleyman, in 1086, he became a hostage of Sultan Malik Shah I of Great Seljuq, but was released when Malik Shah died in 1092.
Kilij Arslan then marched at the head of the Turkish Oghuz Yiva tribe army and set up his capital at Nicaea, replacing Amin ‘l Ghazni, the governor appointed by Malik Shah I. Following the death of Malik Shah I, the individual tribes, the Danishmends, Mangujekids, Saltuqids, Chaka, Tengribirmish begs, Artuqids (Ortoqids) and Akhlat-Shahs, had started vying with each other to establish their own independent states. Alexius Comnenus’s Byzantine intrigues further complicated the situation. He married the daughter of the Emir Chaka, who commanded a strong naval fleet, in an attempt to ally himself against the Byzantines. In 1094, Kilij Arslan received a letter from Alexius suggesting that Chaka sought to target him before moving against the Byzantines; thereupon Kilij Arslan marched with an army to Smyrna, Chaka’s capital, and invited his father-in-law to a banquet in his tent, where he slew him while he was intoxicated. ## The Crusades ### People’s Crusade The People’s Crusade (also called the Peasants’ Crusade) army of Peter the Hermit and Walter the Penniless arrived at Nicaea in 1096. A German contingent of the crusade overran the castle Xerigordon and held it until Kilij sent a force to starve them out. Those who renounced Christianity were spared and sent into captivity in the east; the rest were slaughtered.[2] The remainder of Peter’s crusade was surprised near the village of Dracon by Kilij Arslan’s army.[3] They were easily defeated and around 30,000 men, women and children were killed.[4] He then invaded the Danishmend Emirate of Malik Ghazi in eastern Anatolia. ### First Crusade Because of this easy first victory he did not consider the main crusader army, led by various nobles of western Europe, to be a serious threat. He resumed his war with the Danishmends, and was away from Nicaea when these new Crusaders besieged Nicaea in May 1097. He hurried back to his capital to find it surrounded by the Crusaders, and was defeated in battle with them on May 21.
The city then surrendered to the Byzantines and his wife and children were captured. When the crusaders sent the Sultana to Constantinople, to their dismay she was later returned without ransom in 1097, because of the relationship between Kilij Arslan and Alexius Comnenus. As a result of the stronger invasion, Rûm and the Danishmends allied in their attempt to turn back the crusaders. The Crusaders continued to split their forces as they marched across Anatolia. The combined Danishmend and Rûm forces planned to ambush the Crusaders near Dorylaeum on June 29. However, Kilij Arslan’s horse archers could not penetrate the line of defense set up by the Crusader knights, and the main body under Bohemund arrived to capture the Turkish camp on July 1. Kilij Arslan retreated and inflicted losses on the Crusader army with guerrilla warfare and hit-and-run tactics. He also destroyed crops and water supplies along their route in order to disrupt the logistical supply of the Crusader army. See also: Siege of Nicaea, Battle of Dorylaeum ### Crusade of 1101 Ghazni ibn Danishmend captured Bohemond, prompting a new force of Lombards to attempt to rescue him. On their march they took Ankara from Arslan and advanced upon the Danishmends. In alliance with Radwan, the Atabeg of Aleppo, he ambushed this force at the Battle of Mersivan. In 1101 he defeated another Crusader army at Heraclea Cybistra, which had come to assist the fledgling Crusader states in Syria. This was an important victory for the Turks, as it proved that an army of Crusader knights was not invincible. After this victory he moved his capital to Konya and defeated a force led by William II of Nevers that attempted to march upon it, as well as a subsequent force a week later. In 1104 he resumed once more his war with the Danishmends, who were now weakened after the death of Malik Ghazi, demanding half the ransom gained for Bohemund. As a result, Bohemund allied with the Danishmends against Rûm and the Byzantines.
## War and death in Syria

After the crusades he moved east, taking Harran and Diyarbakr. In 1107 he conquered Mosul, but he was defeated by Mehmed I of Great Seljuq, supported by the Ortoqids and Fakhr al-Mulk Radwan of Aleppo, at the battle of the Khabur River.[5] Having lost the battle, Kilij Arslan died trying to escape across the river.[6]

## References

1. ^ Masudul Hasan and Abdul Waheed, Outline History of the Islamic World, p. 159.
2. ^ Steven Runciman, “The First Crusade: Constantinople to Antioch”, in A History of the Crusades, Vol. 1, ed. Marshall W. Baldwin (University of Wisconsin Press, 1969), 283.
3. ^ Steven Runciman, “The First Crusade: Constantinople to Antioch”, in A History of the Crusades, Vol. 1, 283.
4. ^ Jill N. Claster, Sacred Violence: The European Crusades to the Middle East, 1095–1396 (University of Toronto Press, 2009), 45.
5. ^ Osman Turan, “Anatolia in the Period of the Seljuks and the Beyliks”, in The Cambridge History of Islam, ed. Peter Malcolm Holt, Ann K. S. Lambton and Bernard Lewis (Cambridge University Press, 1970), 239.
6. ^ Steven Runciman, A History of the Crusades, Vol. 2: The Kingdom of Jerusalem and the Frankish East, 1100–1187 (Cambridge University Press, 1951), 110.

Preceded by Suleyman I · Sultan of Rûm, 1092–1107 · Succeeded by Melikshah

## Why can’t there be an error correcting code with fewer than 5 qubits?

I read about 9-qubit, 7-qubit and 5-qubit error correcting codes lately. But why can there not be a quantum error correcting code with fewer than 5 qubits?

• Removed the comment with a false claim. Refer to Niel’s accepted answer. – Jalex Stark Nov 26 at 18:29
## 4 Answers

## A proof that you need at least 5 qubits (or qudits)

Here is a proof that any single-error correcting (i.e., distance 3) quantum error correcting code has at least 5 qubits. In fact, this generalises to qudits of any dimension $d$, and to any quantum error correcting code protecting one or more qudits of dimension $d$. (As Felix Huber notes, the original proof that you require at least 5 qubits is due to the Knill–Laflamme article [arXiv:quant-ph/9604034], which set out the Knill–Laflamme conditions; the following is the proof technique which is more commonly used nowadays.)

Any quantum error correcting code which can correct $t$ unknown errors can also correct up to $2t$ erasure errors (where we simply lose some qubit, or it becomes completely depolarised, or similar) if the locations of the erased qubits are known [1, Sec. III A]*. Slightly more generally, a quantum error correcting code of distance $d$ can tolerate $d-1$ erasure errors.
For example, while the $[\![4,2,2]\!]$ code can’t correct any errors at all, in essence because it can tell that an error has happened (and even which type of error) but not which qubit it has happened to, that same code can protect against a single erasure error (because by hypothesis we know precisely where the error occurs in this case). It follows that any quantum error correcting code which can tolerate one Pauli error can recover from the loss of two qubits.

Now: suppose you have a quantum error correcting code on $n \geqslant 2$ qubits, encoding one qubit against single-qubit errors. Suppose that you give $n-2$ qubits to Alice, and $2$ qubits to Bob: then Alice should be able to recover the original encoded state. If $n<5$, then $2 \geqslant n-2$, so that Bob should also be able to recover the original encoded state, thereby obtaining a clone of Alice’s state. As this is ruled out by the No Cloning Theorem, it follows that we must have $n \geqslant 5$ instead.

## On correcting erasure errors

* The earliest reference I found for this is [1] Grassl, Beth, and Pellizzari, “Codes for the Quantum Erasure Channel”, Phys. Rev. A 56 (pp. 33–38), 1997 [arXiv:quant-ph/9610042], which is not long after the Knill–Laflamme conditions were described in [arXiv:quant-ph/9604034], and so is plausibly the original proof of the connection between code distance and erasure errors. The outline is as follows, and applies to error correcting codes of distance $d$ (and applies equally well to qudits of any dimension in place of qubits, using generalised Pauli operators).

• The loss of $d-1$ qubits can be modelled by those qubits being subject to the completely depolarising channel, which in turn can be modelled by those qubits being subject to uniformly random Pauli errors.

• If the locations of those $d-1$ qubits were unknown, this would be fatal.
However, as their locations are known, any pair of Pauli errors on those $d-1$ qubits can be distinguished from one another, by appeal to the Knill–Laflamme conditions.

• Therefore, by substituting the erased qubits with qubits in the maximally mixed state and testing for Pauli errors on those $d-1$ qubits specifically (requiring a different correction procedure than you would use for correcting arbitrary Pauli errors, mind you), you can recover the original state.

• N.B. If you’ve upvoted my answer, you should consider upvoting Felix Huber’s answer as well, for having identified the original proof. – Niel de Beaudrap 12 hours ago

What we can easily prove is that there’s no smaller non-degenerate code. In a non-degenerate code, you have to have the 2 logical states of the qubit, and you have to have a distinct state for each possible error to map each logical state into. So, let’s say you had a 5 qubit code, with the two logical states $|0_L\rangle$ and $|1_L\rangle$. The set of possible single-qubit errors is $X_1,X_2,\ldots,X_5,Y_1,Y_2,\ldots,Y_5,Z_1,Z_2,\ldots,Z_5$, and it means that all the states $|0_L\rangle,|1_L\rangle,X_1|0_L\rangle,X_1|1_L\rangle,X_2|0_L\rangle,\ldots$ must map to orthogonal states. If we apply this argument in general, it shows us that we need $2+2\times(3n)$ distinct states. But, for $n$ qubits, the maximum number of distinct states is $2^n$. So, for a non-degenerate error correcting code of distance 3 (i.e. correcting at least one error) or greater, we need $$2^n \geq 2(3n+1).$$ This is called the Quantum Hamming Bound. You can easily check that this is true for all $n \geq 5$, but not if $n<5$. Indeed, for $n=5$, the inequality is an equality, and we call the corresponding 5-qubit code the perfect code as a result.

• Can’t you prove this by no-cloning for any code, without invoking the Hamming bound?
– Norbert Schuch Nov 23 at 23:12

• @NorbertSchuch the only proof I know involving cloning just shows that an n qubit code cannot correct for n/2 or more errors. If you know another construction, I’d be very happy to learn it! – DaftWullie Nov 24 at 6:17

• Ah, I see that’s the point of @NieldeBeaudrap’s answer. Cool 🙂 – DaftWullie Nov 24 at 6:56

• Thought that was a standard argument 😮 – Norbert Schuch Nov 24 at 12:11

As a complement to the other answers, I am going to add the general quantum Hamming bound for non-degenerate quantum error correction codes. The mathematical formulation of the bound is $$2^{n-k} \geq \sum_{j=0}^{t} \binom{n}{j} 3^j,$$ where $n$ refers to the number of qubits that form the codewords, $k$ is the number of information qubits that are encoded (so that they are protected from decoherence), and $t$ is the maximum number of qubit errors corrected by the code. As $t$ is related to the distance by $t = \lfloor\frac{d-1}{2}\rfloor$, such a non-degenerate quantum code will be an $[[n,k,d]]$ quantum error correction code. This bound is obtained by a sphere-packing-like argument: the $2^n$-dimensional Hilbert space is partitioned into $2^{n-k}$ subspaces, each distinguished by the syndrome measured; one error is assigned to each of the syndromes, and the recovery operation is done by inverting the error associated with the measured syndrome. That is why the total number of errors corrected by a non-degenerate quantum code must be less than or equal to the number of partitions given by the syndrome measurement.

However, degeneracy is a property of quantum error correction codes that implies that there are equivalence classes among the errors that can affect the transmitted codewords. This means that there are errors whose effect on the transmitted codewords is the same while sharing the same syndrome.
This implies that those classes of degenerate errors are corrected via the same recovery operation, and so more errors than expected can be corrected. That is why it is not known whether the quantum Hamming bound holds for these degenerate error correction codes, as more errors than there are partitions can be corrected this way. Please refer to this question for some information about the violation of the quantum Hamming bound.

I wanted to add a short comment on the earliest reference. I believe this was shown already a bit earlier, in Section 5.2 of

A Theory of Quantum Error-Correcting Codes
Emanuel Knill, Raymond Laflamme
https://arxiv.org/abs/quant-ph/9604034

where the specific result is:

Theorem 5.1. A $(2^r,k)$ $e$-error-correcting quantum code must satisfy $r \geqslant 4e + \lceil \log k \rceil$.

Here, an $(N,K)$ code is an embedding of a $K$-dimensional subspace into an $N$-dimensional system; it is an $e$-error-correcting code if the system decomposes as a tensor product of qubits, and the code is capable of correcting errors of weight $e$. In particular, a $(2^n, 2^k)$ $e$-error-correcting code is what we would now describe as an $[\![n,k,2e{+}1]\!]$ code. Theorem 5.1 then allows us to prove that for $k \geqslant 1$ and $d \geqslant 3$, an $[\![n,k,d]\!]$ code must satisfy $$\begin{aligned} n \;&\geqslant\; 4\bigl\lceil\tfrac{d-1}{2}\bigr\rceil + \lceil \log 2^k \rceil \\[1ex] &\geqslant\; \bigl\lceil 4 \cdot \tfrac{d-1}{2} \bigr\rceil + \lceil k \rceil \\[1ex] &=\; 2d - 2 + k \;\geqslant\; 6 - 2 + 1 \;=\; 5. \end{aligned}$$

(N.B. There is a peculiarity with the dates here: the arXiv submission of the above paper is April 1996, a couple of months earlier than the Grassl, Beth, and Pellizzari paper submitted in October 1996. However, the date below the title in the PDF states a year earlier, April 1995.)

As an alternative proof, I could imagine (but haven’t tested yet) that simply solving for a weight distribution that satisfies the MacWilliams identities should also suffice.
Such a strategy is indeed used in

Quantum MacWilliams Identities
Peter Shor, Raymond Laflamme
https://arxiv.org/abs/quant-ph/9610040

to show that no degenerate code on five qubits exists that can correct any single error.

• Excellent reference, thanks! I didn’t know the Knill–Laflamme paper well enough to know that the lower bound of 5 was there as well. – Niel de Beaudrap 12 hours ago

• Thanks for editing! About the lower bound, it seems they don’t address that five qubits are needed, but only that such a code must necessarily be non-degenerate. – Felix Huber 12 hours ago

• As a side note, from the quantum Singleton bound $n=5$ also follows for the smallest code able to correct any single error. In this case, no-cloning is not required (as $d \leq n/2+1$ already), and the bound follows from subadditivity of the von Neumann entropy. Cf. Section 7.8.3 in Preskill’s lecture notes, theory.caltech.edu/people/preskill/ph229/notes/chap7.pdf – Felix Huber 12 hours ago

• Unless I badly misread that Section, it seems to me that they show that no error correcting code exists for $r \leqslant 4$; it seems clear that this also follows from Theorem 5.1 as well. None of their terminology suggests that their result is special to non-degenerate codes. – Niel de Beaudrap 12 hours ago

• Sorry for the confusion. My side-comment was referring to the Quantum MacWilliams Identities paper: there it was only shown that a single-error correcting five qubit code must be pure/non-degenerate. Section 5.2 in the Knill–Laflamme paper (“A Theory of QECC…”), as they point out, is general. – Felix Huber 12 hours ago
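To make the counting in these answers concrete, here is a short Python check (my own illustration, not part of the original thread) of the non-degenerate quantum Hamming bound $2^{n-k} \geq \sum_{j=0}^{t} \binom{n}{j} 3^j$ discussed above, for a distance-3 code encoding one qubit:

```python
from math import comb

def hamming_bound_holds(n: int, k: int = 1, t: int = 1) -> bool:
    """Non-degenerate quantum Hamming bound: 2^(n-k) >= sum_j C(n,j) * 3^j."""
    return 2 ** (n - k) >= sum(comb(n, j) * 3 ** j for j in range(t + 1))

# For a distance-3 code (t = 1) encoding one qubit (k = 1):
for n in range(1, 8):
    print(n, hamming_bound_holds(n))
# The bound fails for n = 1..4 and holds from n = 5 onward.
# At n = 5 it is tight: 2^4 = 16 = 1 + 3*5, which is why the
# five-qubit code is called the "perfect" code.
```

The equality at $n=5$ is exactly the statement that every one of the $2^4$ syndromes is used by either the identity or one of the 15 single-qubit Pauli errors.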
In fact, this generalises to qudits of any dimension $$d$$, and any quantum error correcting code protecting one or more qudits of dimension $$d$$. (As Felix Huber notes, the original proof that you require at least 5 qubits is due to the Knill–Laflamme article [arXiv:quant-ph/9604034] which set out the Knill–Laflamme conditions: the following is the proof technique which is more commonly used nowadays.) Any quantum error correcting code which can correct $$t$$ unknown errors, can also correct up to $$2t$$ erasure errors (where we simply lose some qubit, or it becomes completely depolarised, or similar) if the locations of the erased qubits are known. [1, Sec. III A]*. Slightly more generally, a quantum error correcting code of distance $$d$$ can tolerate $$d-1$$ erasure errors. For example, while the $$[![4,2,2]!]$$ code can’t correct any errors at all, in essence because it can tell an error has happened (and even which type of error) but not which qubit it has happened to, that same code can protect against a single erasure error (because by hypothesis we know precisely where the error occurs in this case). It follows that any quantum error correcting code which can tolerate one Pauli error, can recover from the loss of two qubits. Now: suppose you have a quantum error correcting code on $$n geqslant 2$$ qubits, encoding one qubit against single-qubit errors. Suppose that you give $$n-2$$ qubits to Alice, and $$2$$ qubits to Bob: then Alice should be able to recover the original encoded state. If $$n<5$$, then $$2 geqslant n-2$$, so that Bob should also be able to recover the original encoded state — thereby obtaining a clone of Alice’s state. As this is ruled out by the No Cloning Theorem, it follows that we must have $$n geqslant 5$$ instead. ## On correcting erasure errors * The earliest reference I found for this is [1] Grassl, Beth, and Pellizzari. Codes for the Quantum Erasure Channel. Phys. Rev. A 56 (pp. 33–38), 1997. 
[arXiv:quant-ph/9610042] — which is not much long after the Knill–Laflamme conditions were described in [arXiv:quant-ph/9604034] and so plausibly the original proof of the connection between code distance and erasure errors. The outline is as follows, and applies to error correcting codes of distance $$d$$ (and applies equally well to qudits of any dimension in place of qubits, using generalised Pauli operators). • The loss of $$d-1$$ qubits can be modelled by those qubits being subject to the completely depolarising channel, which in turn can be modeled by those qubits being subject to uniformly random Pauli errors. • If the locations of those $$d-1$$ qubits were unknown, this would be fatal. However, as their locations are known, any pair Pauli errors on $$d-1$$ qubits can be distinguished from one another, by appeal to the Knill-Laflamme conditions. • Therefore, by substituting the erased qubits with qubits in the maximally mixed state and testing for Pauli errors on those $$d-1$$ qubits specificaly (requiring a different correction procedure than you would use for correcting arbitrary Pauli errors, mind you), you can recover the original state. • N.B. If you’ve upvoted my answer, you should consider upvoting Felix Huber’s answer as well, for having identified the original proof. – Niel de Beaudrap 12 hours ago ## A proof that you need at least 5 qubits (or qudits) Here is a proof that any single-error correcting (i.e., distance 3) quantum error correcting code has at least 5 qubits. In fact, this generalises to qudits of any dimension $$d$$, and any quantum error correcting code protecting one or more qudits of dimension $$d$$. (As Felix Huber notes, the original proof that you require at least 5 qubits is due to the Knill–Laflamme article [arXiv:quant-ph/9604034] which set out the Knill–Laflamme conditions: the following is the proof technique which is more commonly used nowadays.) 
Any quantum error correcting code which can correct $$t$$ unknown errors, can also correct up to $$2t$$ erasure errors (where we simply lose some qubit, or it becomes completely depolarised, or similar) if the locations of the erased qubits are known. [1, Sec. III A]*. Slightly more generally, a quantum error correcting code of distance $$d$$ can tolerate $$d-1$$ erasure errors. For example, while the $$[![4,2,2]!]$$ code can’t correct any errors at all, in essence because it can tell an error has happened (and even which type of error) but not which qubit it has happened to, that same code can protect against a single erasure error (because by hypothesis we know precisely where the error occurs in this case). It follows that any quantum error correcting code which can tolerate one Pauli error, can recover from the loss of two qubits. Now: suppose you have a quantum error correcting code on $$n geqslant 2$$ qubits, encoding one qubit against single-qubit errors. Suppose that you give $$n-2$$ qubits to Alice, and $$2$$ qubits to Bob: then Alice should be able to recover the original encoded state. If $$n<5$$, then $$2 geqslant n-2$$, so that Bob should also be able to recover the original encoded state — thereby obtaining a clone of Alice’s state. As this is ruled out by the No Cloning Theorem, it follows that we must have $$n geqslant 5$$ instead. ## On correcting erasure errors * The earliest reference I found for this is [1] Grassl, Beth, and Pellizzari. Codes for the Quantum Erasure Channel. Phys. Rev. A 56 (pp. 33–38), 1997. [arXiv:quant-ph/9610042] — which is not much long after the Knill–Laflamme conditions were described in [arXiv:quant-ph/9604034] and so plausibly the original proof of the connection between code distance and erasure errors. The outline is as follows, and applies to error correcting codes of distance $$d$$ (and applies equally well to qudits of any dimension in place of qubits, using generalised Pauli operators). 
• The loss of $$d-1$$ qubits can be modelled by those qubits being subject to the completely depolarising channel, which in turn can be modeled by those qubits being subject to uniformly random Pauli errors. • If the locations of those $$d-1$$ qubits were unknown, this would be fatal. However, as their locations are known, any pair Pauli errors on $$d-1$$ qubits can be distinguished from one another, by appeal to the Knill-Laflamme conditions. • Therefore, by substituting the erased qubits with qubits in the maximally mixed state and testing for Pauli errors on those $$d-1$$ qubits specificaly (requiring a different correction procedure than you would use for correcting arbitrary Pauli errors, mind you), you can recover the original state. • N.B. If you’ve upvoted my answer, you should consider upvoting Felix Huber’s answer as well, for having identified the original proof. – Niel de Beaudrap 12 hours ago ## A proof that you need at least 5 qubits (or qudits) Here is a proof that any single-error correcting (i.e., distance 3) quantum error correcting code has at least 5 qubits. In fact, this generalises to qudits of any dimension $$d$$, and any quantum error correcting code protecting one or more qudits of dimension $$d$$. (As Felix Huber notes, the original proof that you require at least 5 qubits is due to the Knill–Laflamme article [arXiv:quant-ph/9604034] which set out the Knill–Laflamme conditions: the following is the proof technique which is more commonly used nowadays.) Any quantum error correcting code which can correct $$t$$ unknown errors, can also correct up to $$2t$$ erasure errors (where we simply lose some qubit, or it becomes completely depolarised, or similar) if the locations of the erased qubits are known. [1, Sec. III A]*. Slightly more generally, a quantum error correcting code of distance $$d$$ can tolerate $$d-1$$ erasure errors. 
For example, while the $$[![4,2,2]!]$$ code can’t correct any errors at all, in essence because it can tell an error has happened (and even which type of error) but not which qubit it has happened to, that same code can protect against a single erasure error (because by hypothesis we know precisely where the error occurs in this case). It follows that any quantum error correcting code which can tolerate one Pauli error, can recover from the loss of two qubits. Now: suppose you have a quantum error correcting code on $$n geqslant 2$$ qubits, encoding one qubit against single-qubit errors. Suppose that you give $$n-2$$ qubits to Alice, and $$2$$ qubits to Bob: then Alice should be able to recover the original encoded state. If $$n<5$$, then $$2 geqslant n-2$$, so that Bob should also be able to recover the original encoded state — thereby obtaining a clone of Alice’s state. As this is ruled out by the No Cloning Theorem, it follows that we must have $$n geqslant 5$$ instead. ## On correcting erasure errors * The earliest reference I found for this is [1] Grassl, Beth, and Pellizzari. Codes for the Quantum Erasure Channel. Phys. Rev. A 56 (pp. 33–38), 1997. [arXiv:quant-ph/9610042] — which is not much long after the Knill–Laflamme conditions were described in [arXiv:quant-ph/9604034] and so plausibly the original proof of the connection between code distance and erasure errors. The outline is as follows, and applies to error correcting codes of distance $$d$$ (and applies equally well to qudits of any dimension in place of qubits, using generalised Pauli operators). • The loss of $$d-1$$ qubits can be modelled by those qubits being subject to the completely depolarising channel, which in turn can be modeled by those qubits being subject to uniformly random Pauli errors. • If the locations of those $$d-1$$ qubits were unknown, this would be fatal. 
However, as their locations are known, any pair Pauli errors on $$d-1$$ qubits can be distinguished from one another, by appeal to the Knill-Laflamme conditions. • Therefore, by substituting the erased qubits with qubits in the maximally mixed state and testing for Pauli errors on those $$d-1$$ qubits specificaly (requiring a different correction procedure than you would use for correcting arbitrary Pauli errors, mind you), you can recover the original state. ## A proof that you need at least 5 qubits (or qudits) Here is a proof that any single-error correcting (i.e., distance 3) quantum error correcting code has at least 5 qubits. In fact, this generalises to qudits of any dimension $$d$$, and any quantum error correcting code protecting one or more qudits of dimension $$d$$. (As Felix Huber notes, the original proof that you require at least 5 qubits is due to the Knill–Laflamme article [arXiv:quant-ph/9604034] which set out the Knill–Laflamme conditions: the following is the proof technique which is more commonly used nowadays.) Any quantum error correcting code which can correct $$t$$ unknown errors, can also correct up to $$2t$$ erasure errors (where we simply lose some qubit, or it becomes completely depolarised, or similar) if the locations of the erased qubits are known. [1, Sec. III A]*. Slightly more generally, a quantum error correcting code of distance $$d$$ can tolerate $$d-1$$ erasure errors. For example, while the $$[![4,2,2]!]$$ code can’t correct any errors at all, in essence because it can tell an error has happened (and even which type of error) but not which qubit it has happened to, that same code can protect against a single erasure error (because by hypothesis we know precisely where the error occurs in this case). It follows that any quantum error correcting code which can tolerate one Pauli error, can recover from the loss of two qubits. 
Now: suppose you have a quantum error correcting code on $$n geqslant 2$$ qubits, encoding one qubit against single-qubit errors. Suppose that you give $$n-2$$ qubits to Alice, and $$2$$ qubits to Bob: then Alice should be able to recover the original encoded state. If $$n<5$$, then $$2 geqslant n-2$$, so that Bob should also be able to recover the original encoded state — thereby obtaining a clone of Alice’s state. As this is ruled out by the No Cloning Theorem, it follows that we must have $$n geqslant 5$$ instead. ## On correcting erasure errors * The earliest reference I found for this is [1] Grassl, Beth, and Pellizzari. Codes for the Quantum Erasure Channel. Phys. Rev. A 56 (pp. 33–38), 1997. [arXiv:quant-ph/9610042] — which is not much long after the Knill–Laflamme conditions were described in [arXiv:quant-ph/9604034] and so plausibly the original proof of the connection between code distance and erasure errors. The outline is as follows, and applies to error correcting codes of distance $$d$$ (and applies equally well to qudits of any dimension in place of qubits, using generalised Pauli operators). • The loss of $$d-1$$ qubits can be modelled by those qubits being subject to the completely depolarising channel, which in turn can be modeled by those qubits being subject to uniformly random Pauli errors. • If the locations of those $$d-1$$ qubits were unknown, this would be fatal. However, as their locations are known, any pair Pauli errors on $$d-1$$ qubits can be distinguished from one another, by appeal to the Knill-Laflamme conditions. • Therefore, by substituting the erased qubits with qubits in the maximally mixed state and testing for Pauli errors on those $$d-1$$ qubits specificaly (requiring a different correction procedure than you would use for correcting arbitrary Pauli errors, mind you), you can recover the original state. edited 12 hours ago answered Nov 23 at 22:04 Niel de Beaudrap 5,3061932 5,3061932 • N.B. 
If you’ve upvoted my answer, you should consider upvoting Felix Huber’s answer as well, for having identified the original proof. – Niel de Beaudrap 12 hours ago • N.B. If you’ve upvoted my answer, you should consider upvoting Felix Huber’s answer as well, for having identified the original proof. – Niel de Beaudrap 12 hours ago N.B. If you’ve upvoted my answer, you should consider upvoting Felix Huber’s answer as well, for having identified the original proof. – Niel de Beaudrap 12 hours ago N.B. If you’ve upvoted my answer, you should consider upvoting Felix Huber’s answer as well, for having identified the original proof. – Niel de Beaudrap 12 hours ago What we can easily prove is that there’s no smaller non-degenerate code. In a non-degenerate code, you have to have the 2 logical states of the qubit, and you have to have a distinct state for each possible error to map each logical state into. So, let’s say you had a 5 qubit code, with the two logical states $$|0_Lrangle$$ and $$|1_Lrangle$$. The set of possible single-qubit errors are $$X_1,X_2,ldots X_5,Y_1,Y_2,ldots,Y_5,Z_1,Z_2,ldots,Z_5$$, and it means that all the states $$|0_Lrangle,|1_Lrangle,X_1|0_Lrangle,X_1|1_Lrangle,X_2|0_Lrangle,ldots$$ must map to orthogonal states. If we apply this argument in general, it shows us that we need $$2+2times(3n)$$ distinct states. But, for $$n$$ qubits, the maximum number of distinct states is $$2^n$$. So, for a non-degenerate error correct code of distance 3 (i.e. correcting for at least one error) or greater, we need $$2^ngeq 2(3n+1).$$ This is called the Quantum Hamming Bound. You can easily check that this is true for all $$ngeq 5$$, but not if $$n<5$$. Indeed, for $$n=5$$, the inequality is an equality, and we call the corresponding 5-qubit code the perfect code as a result. • Can’t you prove this by no-cloning for any code, without invoking the Hamming bound? 
– Norbert Schuch Nov 23 at 23:12

• @NorbertSchuch the only proof I know involving cloning just shows that an n qubit code cannot correct for n/2 or more errors. If you know another construction, I'd be very happy to learn it! – DaftWullie Nov 24 at 6:17

• Ah, I see that's the point of @NieldeBeaudrap's answer. Cool 🙂 – DaftWullie Nov 24 at 6:56

• Thought that was a standard argument 😮 – Norbert Schuch Nov 24 at 12:11

What we can easily prove is that there is no smaller non-degenerate code. In a non-degenerate code, you have to have the two logical states of the qubit, and you have to have a distinct state for each possible error to map each logical state into. So, suppose you had a 5-qubit code with the two logical states $$|0_L\rangle$$ and $$|1_L\rangle$$. The set of possible single-qubit errors is $$X_1,X_2,\ldots,X_5,\,Y_1,Y_2,\ldots,Y_5,\,Z_1,Z_2,\ldots,Z_5$$, which means that all the states $$|0_L\rangle,\,|1_L\rangle,\,X_1|0_L\rangle,\,X_1|1_L\rangle,\,X_2|0_L\rangle,\ldots$$ must be mutually orthogonal. Applying this argument in general shows that we need $$2+2\times(3n)$$ distinct states. But, for $$n$$ qubits, the maximum number of mutually orthogonal states is $$2^n$$. So, for a non-degenerate error-correcting code of distance 3 (i.e. correcting at least one error) or greater, we need $$2^n \geq 2(3n+1).$$ This is called the quantum Hamming bound. You can easily check that it holds for all $$n \geq 5$$, but not for $$n < 5$$. Indeed, for $$n = 5$$ the inequality is an equality, and the corresponding 5-qubit code is called the perfect code as a result.

edited Nov 26 at 7:49, answered Nov 23 at 13:17 by DaftWullie
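The counting in the bound $$2^n \geq 2(3n+1)$$ is easy to verify numerically. Here is a minimal sketch (Python, written for this page, not from the original post) that finds the smallest $$n$$ satisfying it:

```python
# Quantum Hamming bound for a non-degenerate distance-3 code on n qubits:
# the 2 logical states plus the 3n single-qubit errors applied to each
# must all be mutually orthogonal, so we need 2^n >= 2*(3n + 1).
def satisfies_hamming_bound(n: int) -> bool:
    return 2**n >= 2 * (3 * n + 1)

smallest = next(n for n in range(1, 20) if satisfies_hamming_bound(n))
print(smallest)                # 5 -- the perfect 5-qubit code
print(2**5 == 2 * (3*5 + 1))   # True -- the bound is saturated at n = 5
```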
As a complement to the other answer, I am going to add the general quantum Hamming bound for non-degenerate quantum error-correcting codes. The mathematical formulation of the bound is $$2^{n-k} \geq \sum_{j=0}^{t} \binom{n}{j} 3^j,$$ where $$n$$ is the number of qubits that form the codewords, $$k$$ is the number of information qubits that are encoded (and so protected from decoherence), and $$t$$ is the number of errors corrected by the code. As $$t$$ is related to the distance by $$t = \lfloor\frac{d-1}{2}\rfloor$$, such a non-degenerate quantum code is an $$[[n,k,d]]$$ quantum error-correcting code.

This bound is obtained by a sphere-packing-like argument: the $$2^n$$-dimensional Hilbert space is partitioned into $$2^{n-k}$$ subspaces, each distinguished by the measured syndrome; one error is assigned to each syndrome, and the recovery operation is performed by inverting the error associated with the measured syndrome. That is why the total number of errors corrected by a non-degenerate quantum code must be less than or equal to the number of partitions given by the syndrome measurement.

However, degeneracy is a property of quantum error-correcting codes implying that there are equivalence classes among the errors that can affect the transmitted codewords: there are errors whose effect on the codewords is the same and which share the same syndrome. Such classes of degenerate errors are corrected by the same recovery operation, and so more errors than expected can be corrected. That is why it is not known whether the quantum Hamming bound holds for degenerate error-correcting codes, as more errors than partitions can be corrected this way. Please refer to this question for some information about the violation of the quantum Hamming bound.

answered Nov 23 at 14:45 by Josu Etxezarreta Martinez
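The general bound above can also be checked directly for small codes. A short sketch (Python, added for this page), which confirms that the $$[[5,1,3]]$$ code saturates the bound while no $$[[4,1,3]]$$ code can exist:

```python
from math import comb

# General quantum Hamming bound for a non-degenerate [[n, k, d]] code,
# with t = floor((d - 1) / 2) correctable errors:
#   2^(n - k) >= sum_{j=0}^{t} C(n, j) * 3^j
def hamming_bound_holds(n: int, k: int, d: int) -> bool:
    t = (d - 1) // 2
    return 2 ** (n - k) >= sum(comb(n, j) * 3 ** j for j in range(t + 1))

print(hamming_bound_holds(5, 1, 3))   # True: 2^4 = 16 = 1 + 5*3, saturated
print(hamming_bound_holds(4, 1, 3))   # False: 2^3 = 8 < 13
print(hamming_bound_holds(7, 1, 3))   # True: the Steane [[7,1,3]] code fits
```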
I wanted to add a short comment on the earliest reference. I believe this was already shown a bit earlier, in Section 5.2 of

A Theory of Quantum Error-Correcting Codes
Emanuel Knill, Raymond Laflamme
https://arxiv.org/abs/quant-ph/9604034

where the specific result is:

Theorem 5.1. A $$(2^r,k)$$ $$e$$-error-correcting quantum code must satisfy $$r \geqslant 4e + \lceil \log k \rceil$$.

Here, an $$(N,K)$$ code is an embedding of a $$K$$-dimensional subspace into an $$N$$-dimensional system; it is an $$e$$-error-correcting code if the system decomposes as a tensor product of qubits and the code is capable of correcting errors of weight $$e$$. In particular, a $$(2^n, 2^k)$$ $$e$$-error-correcting code is what we would now describe as an $$[[n,k,2e+1]]$$ code. Theorem 5.1 then allows us to prove that for $$k \geqslant 1$$ and $$d \geqslant 3$$, an $$[[n,k,d]]$$ code must satisfy
$$\begin{aligned} n &\geqslant 4\bigl\lceil \tfrac{d-1}{2} \bigr\rceil + \lceil \log 2^k \rceil \\ &\geqslant \bigl\lceil 4 \cdot \tfrac{d-1}{2} \bigr\rceil + \lceil k \rceil \\ &= 2d - 2 + k \geqslant 6 - 2 + 1 = 5. \end{aligned}$$

(N.B. There is a peculiarity with the dates here: the arXiv submission of the above paper is April 1996, a couple of months earlier than the Grassl, Beth, and Pellizzari paper submitted in October 1996. However, the date below the title in the PDF states a year earlier, April 1995.)
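The chain of inequalities from Theorem 5.1 is plain integer arithmetic. As a sketch (Python, helper name hypothetical), specialising the theorem to an $$[[n,k,d]]$$ code where $$\lceil \log 2^k \rceil = k$$:

```python
from math import ceil

# Knill-Laflamme Theorem 5.1, r >= 4e + ceil(log K), specialised to an
# [[n, k, d]] code: with e = ceil((d - 1) / 2) and K = 2^k it gives the
# lower bound n >= 4*e + k, which equals 2d - 2 + k for odd d.
def min_qubits(k: int, d: int) -> int:
    e = ceil((d - 1) / 2)
    return 4 * e + k

print(min_qubits(1, 3))  # 5: no [[4, 1, 3]] code can exist
```

Note this is only a lower bound on $$n$$; it does not by itself assert that a code of that size exists.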
As an alternative proof, I could imagine (but haven't tested it yet) that simply solving for a weight distribution that satisfies the MacWilliams identities should also suffice. Such a strategy is indeed used in

Quantum MacWilliams Identities
Peter Shor, Raymond Laflamme
https://arxiv.org/abs/quant-ph/9610040

to show that no degenerate code on five qubits exists that can correct arbitrary single errors.

• Excellent reference, thanks! I didn't know the Knill–Laflamme paper well enough to know that the lower bound of 5 was there as well. – Niel de Beaudrap 12 hours ago

• Thanks for editing! About the lower bound, it seems they don't address that five qubits are needed, but only that such a code must necessarily be non-degenerate. – Felix Huber 12 hours ago

• As a side note, from the quantum Singleton bound also $$n=5$$ follows for the smallest code able to correct arbitrary single errors. In this case, no-cloning is not required (as $$d \leq n/2 + 1$$ already), and the bound follows from subadditivity of the von Neumann entropy. Cf. Section 7.8.3 in Preskill's lecture notes, theory.caltech.edu/people/preskill/ph229/notes/chap7.pdf – Felix Huber 12 hours ago

• Unless I badly misread that section, it seems to me that they show that no error-correcting code exists for $$r \leqslant 4$$; it seems clear that this also follows from Theorem 5.1. None of their terminology suggests that their result is special to non-degenerate codes. – Niel de Beaudrap 12 hours ago

• Sorry for the confusion. My side comment was referring to the Quantum MacWilliams Identities paper: there it was only shown that a single-error-correcting five-qubit code must be pure/non-degenerate. Section 5.2 in the Knill–Laflamme paper ("A Theory of Quantum Error-Correcting Codes") is, as they point out, general. – Felix Huber 12 hours ago
edited 12 hours ago, answered 13 hours ago by Felix Huber
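One of the comments above invokes the quantum Singleton bound as yet another route to $$n = 5$$. As a hedged sketch (helper name hypothetical), that bound alone already forces five qubits for a single-error-correcting code:

```python
# Quantum Singleton bound: an [[n, k, d]] code must satisfy
#   n - k >= 2 * (d - 1),  i.e.  n >= k + 2*(d - 1),
# so for k = 1, d = 3 we already get n >= 5, with no appeal to no-cloning.
def singleton_min_n(k: int, d: int) -> int:
    return k + 2 * (d - 1)

print(singleton_min_n(1, 3))  # 5: consistent with the 5-qubit perfect code
```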