- This topic has 190 replies, 20 voices, and was last updated 16 years, 4 months ago by Eugene.
December 29, 2007 at 10:48 PM #126546

December 30, 2007 at 4:27 AM #126327 | Eugene, Participant
> can you add graphs that chart the rate of change of the particular areas?

Any specific areas you want?

> also, (maybe you did this already? or case-shiller did?) apply filters that weed out 95, 90, and 75 percentile data?

What do you mean?

> Homes like mine were around 500K in late 2000. They hit a peak of around 950K. Realistically, it is probably somewhere between 825 and 850K today, which puts it 70% above its 2000 price and down around 13% from the peak.

My model says that a house worth 500K in late 2000 hit a peak of around 1M in 2005-2006 and is still worth around 950K. So the issue is really that your 13% decline from the peak is not reflected in the statistical data for the area.

It's not reflected because of transactions like these:

http://www.sdlookup.com/Property-7798B5DD-8152_Calle_Catalonia_Carlsbad_CA_92009
$1.45m in 10/2005, $1.54m in 11/2007

http://www.sdlookup.com/Property-B8C49119-7567_Circulo_Sequoia_Carlsbad_CA_92009
$1.01m in 4/2005, $1.01m in 11/2007

http://www.sdlookup.com/Property-AAE222CA-401_Swamis_Ln_Encinitas_CA_92024
$629k in 3/2005, $693k in 11/2007

Maybe your specific neighborhood is different, but it seems that some houses in 92024 and 92009 do sell for 2005 prices and above.

> There is just too much noise in the data. I don't trust any data points.

Every single data point is suspicious in the same way. If you have lots of points, underlying trends will start to show up behind the noise.

> In the chart, Normal Heights is lumped with Mission Valley. These two zips have very little in common.

Good observation. I'm actually aware of that. I was trying to cover all zip codes of Greater San Diego. For 92108 I only had a total of 11 resale pairs (it's mostly a condo area), and it didn't naturally fit with any of its neighbors. Coronado is in a similar situation: it's different from OB and Point Loma, and it's too small to estimate its rate of decline with reasonable precision.

Most areas in the chart have at least 40-50 resale pairs in each half-year period, enough to get the rate of decline down to within a few percent.
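The resale-pair idea above can be sketched in a few lines: each pair of sales of the same house implies one annualized rate of change, and a per-area rate can be taken as the median across pairs. This is a deliberate simplification of true repeat-sales indexes (Case-Shiller fits a regression over all pairs jointly); the function names and the example numbers below are illustrative, not Eugene's actual code.

```python
# Simplified repeat-sales sketch (not the actual Case-Shiller regression):
# each resale pair gives one annualized growth estimate; aggregate by median.
import statistics


def annualized_rate(price1, price2, years):
    """Constant annual growth rate implied by one resale pair."""
    return (price2 / price1) ** (1 / years) - 1


def area_rate(pairs):
    """Median annualized rate over (price1, price2, years) resale pairs."""
    return statistics.median(annualized_rate(p1, p2, y) for p1, p2, y in pairs)


# Using the numbers from the thread: a house at 500K in late 2000,
# worth about 950K seven years later, implies roughly 9.6% per year.
print(f"{annualized_rate(500_000, 950_000, 7):.1%}")
```

The median (rather than the mean) across pairs already gives some robustness against the aberrant individual transactions discussed later in the thread.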
December 30, 2007 at 8:47 AM #126362 | sdrealtor, Participant

The difference is you are using a model and I am basing my stats on real street-level market information. My house was never worth $1m, and the 950K would have required a lucky sale at the absolute peak. My neighborhood is very representative of the overall market and, if anything, has been stronger in the decline.

The problem with using stats looking down from cyberspace is that you use examples like Circulo Sequoia and Swami's Lane, which were new purchases. Both required landscaping, window treatments, and assorted other improvements to make them livable, which could (and did) easily add 10% to the purchase price you used. The Swami's house was in a new tract close to the beach which had over 1,000 people trying to buy about 30 homes. More than half of them went to friends and family of the builder. If they had been sold on the open market, the prices would have been much higher. I was on the list to buy one myself. These are examples of the kind of noise present in the data.

I think what you tried to do is great. It was a valiant effort, but you are trying to do something which quite simply can't be done with any degree of accuracy. Sure, trends emerge (prices increased from 2000 to 2005/6 and now they are falling... DUH!), but they can be observed equally well with common sense. What has happened and what really is happening can't be accurately determined from cyberspace.
December 30, 2007 at 10:39 AM #126417 | drunkle, Participant

esmith:

> can you add graphs that chart the rate of change of the particular areas?
> any specific areas you want?

> also, (maybe you did this already? or case-shiller did?) apply filters that weed out 95, 90, and 75 percentile data?
> What do you mean?

no preference for area, figure you could easily do it for all your existing classifications...

filter out the 5th/95th percentile data, 10th/90th, 25th/75th... as in, get rid of the data that is outside of the percent range in the distribution...

i don't recall the exact method of doing so, and i don't even recall the proper term. but essentially, lop off the top and bottom set of data that is less than the bottom 5% and greater than the top 95% of the data in the distribution. so for example:

dataset:
15
15
15
45
45
55
65
65
75
95

median = 50, 10th percentile = 17, 90th = 93. eliminate values that fall outside of the percentile range and then recalculate median and plot. for this dataset, median becomes 60...

i'm wondering if doing such would get rid of aberrant values (prices) and affect the median, showing a more accurate picture... or maybe doing such would really only be useful when calculating the mean... or maybe it's just a waste of time as the values don't change much...
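The trimming procedure drunkle is describing (the usual term is a "trimmed" or "truncated" statistic) can be sketched as follows. Note that percentile cutoffs depend on the interpolation convention used, so the exact thresholds, and hence the trimmed median, can differ from the numbers quoted in the post; this sketch uses linear interpolation between order statistics, and the function names are my own.

```python
# Percentile trimming: drop values outside a chosen percentile band,
# then recompute the median on what remains.
import statistics


def percentile(sorted_vals, pct):
    """Percentile by linear interpolation between adjacent order statistics."""
    rank = pct / 100 * (len(sorted_vals) - 1)
    lo = int(rank)
    frac = rank - lo
    if lo + 1 < len(sorted_vals):
        return sorted_vals[lo] + frac * (sorted_vals[lo + 1] - sorted_vals[lo])
    return sorted_vals[lo]


def trimmed_median(values, lower_pct=10, upper_pct=90):
    """Median after discarding values outside [lower_pct, upper_pct]."""
    s = sorted(values)
    lo, hi = percentile(s, lower_pct), percentile(s, upper_pct)
    kept = [v for v in s if lo <= v <= hi]
    return statistics.median(kept)


prices = [15, 15, 15, 45, 45, 55, 65, 65, 75, 95]
print(statistics.median(prices))  # untrimmed median: 50.0
print(trimmed_median(prices))     # with this convention only 95 is trimmed
```

As drunkle suspects, trimming matters much more for the mean than for the median: the median is already insensitive to the extreme tails, so trimming mostly changes it when the trimmed values shift which observation sits in the middle.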