I am copying some of the functionality from this dialog in the S3 Organizer. I don’t think I will copy these instructions:
{6230289B-5BEE-409e-932A-2F01FA407A92}
I added a Service Reference in my .NET C# assembly and got the infrastructure for a fully enabled WCF proxy. I also got an app.config file added to my project that I didn’t want. I need to run configuration free since I am running in both COM+ and managed mode (i.e. some public classes are ComVisible and some are managed), plus my ComVisible classes are hosted in COM+. So I need to get rid of the app.config, which looks like this:
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <system.serviceModel>
    <bindings>
      <basicHttpBinding>
        <binding name="AmazonS3SoapBinding" closeTimeout="00:01:00"
                 openTimeout="00:01:00" receiveTimeout="00:10:00" sendTimeout="00:01:00"
                 allowCookies="false" bypassProxyOnLocal="false"
                 hostNameComparisonMode="StrongWildcard"
                 maxBufferSize="65536" maxBufferPoolSize="524288" maxReceivedMessageSize="65536"
                 messageEncoding="Text" textEncoding="utf-8" transferMode="Buffered"
                 useDefaultWebProxy="true">
          <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                        maxBytesPerRead="4096" maxNameTableCharCount="16384" />
          <security mode="None">
            <transport clientCredentialType="None" proxyCredentialType="None" realm="" />
            <message clientCredentialType="UserName" algorithmSuite="Default" />
          </security>
        </binding>
      </basicHttpBinding>
    </bindings>
    <client>
      <endpoint address="https://s3.amazonaws.com/soap" binding="basicHttpBinding"
                bindingConfiguration="AmazonS3SoapBinding" contract="AmazonS3.AmazonS3"
                name="AmazonS3" />
    </client>
  </system.serviceModel>
</configuration>
I replaced it with a method like this:
private AmazonS3.AmazonS3 CreateChannel()
{
    // Build the same binding the app.config described, in code.
    BasicHttpBinding binding = new BasicHttpBinding();
    binding.Name = "dynamicBinding";
    binding.Security.Mode = BasicHttpSecurityMode.Transport;
    binding.TextEncoding = Encoding.UTF8;
    binding.HostNameComparisonMode = HostNameComparisonMode.StrongWildcard;
    binding.AllowCookies = false;
    binding.MessageEncoding = WSMessageEncoding.Text;
    binding.UseDefaultWebProxy = true;
    binding.OpenTimeout = new TimeSpan(0, 1, 0);
    binding.ReceiveTimeout = new TimeSpan(0, 10, 0);
    binding.MaxBufferPoolSize = 524288;
    binding.MaxReceivedMessageSize = 65536;

    EndpointAddress endpointAddress = new EndpointAddress(AmazonUrl);

    ChannelFactory<AmazonS3.AmazonS3> channelFactory =
        new ChannelFactory<AmazonS3.AmazonS3>(binding, endpointAddress);

    return (channelFactory.CreateChannel());
}
I am not a WCF expert; however, this code works and now I can delete the app.config.
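For completeness, here is a minimal usage sketch; the actual operation call is elided since it depends on the generated AmazonS3 contract:

// Requires using System.ServiceModel; channels created by a
// ChannelFactory also implement IClientChannel, which is how
// you close them deterministically.
AmazonS3.AmazonS3 channel = CreateChannel();
try
{
    // ... call operations on the generated AmazonS3 contract here ...
}
finally
{
    ((IClientChannel)channel).Close();
}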
{6230289B-5BEE-409e-932A-2F01FA407A92}
0x80070002 is the error code returned when a FileNotFoundException (HRESULT COR_E_FILENOTFOUND) is thrown in managed code. Basically, there is a missing file that your managed code expects.
In my case I was getting this when I tried to install a component in COM+ (Component Services) where the COM object was written in C#. However, I didn’t have a method marked with a [ComRegisterFunctionAttribute], so how could my code throw a file not found exception? The issue was that my assembly relied on other assemblies that were not present, which led to a binding problem when installing the component in COM+. So it was the CLR that was throwing the exception, not my code.
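For context, a registration hook looks something like this minimal sketch (the class and method names are illustrative); without one, none of your own code runs during registration:

using System;
using System.Runtime.InteropServices;

[ComVisible(true)]
public class MyComComponent
{
    // Runs when regasm.exe (or COM+) registers the assembly. If this
    // method -- or any assembly load that registration triggers --
    // needs a file that is missing, you get a FileNotFoundException
    // surfaced as 0x80070002.
    [ComRegisterFunction]
    public static void Register(Type t)
    {
    }

    [ComUnregisterFunction]
    public static void Unregister(Type t)
    {
    }
}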
To make matters worse, I was trying to install the component during an MSI installation. The .log file from MSI just showed a 0x80070002 error. To find the issue I used the Assembly Binding Log Viewer (FUSLOGVW.exe) found in the Windows SDK. While the MSI installer ran, the Assembly Binding Log Viewer was tracking which assemblies were not binding correctly, and keeping a log of them. This allowed me to figure out what MSI was doing, and where the 0x80070002 was coming from.
{6230289B-5BEE-409e-932A-2F01FA407A92}
When managed code with a COM interface throws a SecurityException, the return value (HRESULT) is 0x80131501. This can happen during a regsvr32.exe (or regasm.exe) registration or on any method call to the COM interface.
{6230289B-5BEE-409e-932A-2F01FA407A92}
I run a travel and tourism site for the San Juan Islands, Kulshan.com. A long time ago we wrote up reviews for all the islands, the major towns, and the high tourism attractions. The site is structured this way:
We also created a Google Sitemap giving the home page the highest priority, the islands the next highest, the cities a lower priority, and the individual write-ups the lowest priority. All pretty standard. However, when we started to look at the reports from Google Webmaster Tools we saw something very disturbing:
We were telling Google about 220 pages, however only 93 had been indexed. Why was this? We presumed it was because there were no external links to the leaf pages (the individual reviews of restaurants, parks, beaches, and trails). Google’s crawler was doing exactly what we asked: traversing the high priority pages in the sitemap and leaving the low priority pages. The leaf pages (the reviews) that were linked externally were being indexed; however, Google was assuming the rest were not worth the effort (or the room to index). Since the majority of our traffic was coming directly to the reviews from Google search, we needed to figure out a way to get all the reviews listed in Google’s index.
What we decided to do was blog about all the new additions to the web site: when we posted a new review, we wanted to tell the world about it – blog it. Basically the web site itself is a reference site, structured in a tree. However, what we needed was a news site that told people what was going on with that structure. So we created the Kulshan.com Blog. This was created on Blogger (blogspot.com).
It took Google 5 days to traverse the blog (we link to it from the bottom of every page on Kulshan.com). This is what Google’s crawler found: Link Results For Kulshan.com Blog. Once Google’s crawler found the page, it took another 24 hours for the link to the main site to appear in Google’s search results. Success.
Currently we are avoiding the temptation to add all the reviews that are missing from Google search to the blog; we are just blogging the reviews newly added to the web site.
{6230289B-5BEE-409e-932A-2F01FA407A92}
The size of Elastic Block Storage (EBS) on Amazon EC2 is charged by the size of the formatted drive, not the size of the data on the drive. In the case of the Windows-Server2008-i386-Base-v101 image this is 30 gigabytes (the primary hard drive), which you pay for whether you are using all of it or not.
This is different from the dynamically expanding .vhds that are used in Microsoft Virtual Server.
EBS storage is allocated all at once to avoid disk fragmentation and poor storage performance.
{6230289B-5BEE-409e-932A-2F01FA407A92}
What is Elastic Block Storage (EBS)? When the Amazon web site describes EBS there are a lot of references to mounting and raw block storage -- Linux terms. However, with the new boot from EBS for Windows this is confusing. So from a Windows perspective I am going to try to explain EBS and touch on why boot from EBS is the only way to go for Windows.
Windows doesn’t really have a concept of mounting hard drives, and the operating system likes all physical drives to be in place before it boots. This means that when Amazon EC2 boots your Windows instance, it needs to know what storage (drives) is attached. Keep this in mind as you read on.
In Microsoft Virtual Server we think about this as a .vhd drive. All the virtual hard drives are attached when the virtual instance is started.
Back to Amazon EC2: there are two types of storage (drives) available, EBS drives and Amazon Machine Images (AMIs). Amazon Machine Images are stored on the Amazon S3 cloud and moved to the Amazon EC2 servers when the EC2 instance is booted. Prior to December 3rd, 2009, you could only boot off an AMI. Here are the steps for creating an Amazon EC2 instance that booted off an AMI:
1) Choose an operating system pre-installed on an AMI from the list.
2) Boot your Amazon EC2 instance and wait for the System log to tell you that Windows is ready.
3) Log in via RDC and do some stuff.
4) Shut down your Amazon EC2 instance.
5) Save the AMI that was running in the Amazon EC2 datacenter to Amazon S3, which is a slow process.
6) Register the AMI instance with Amazon EC2, so that EC2 knows where in S3 to fetch the image from.
7) Terminate the instance running on Amazon EC2, which removes the storage from the EC2 data center.
When you wanted to restart that AMI:
1) Choose your AMI image to boot from the registered AMI list.
2) Your image is copied from Amazon S3 to the EC2 data center; this tends to be very slow -- up to 20 minutes.
3) The image is booted and now you can log in.
4) Once you are done using the EC2 instance, you have to repeat the steps to save it off to the Amazon S3 cloud.
AMI instances were impractical for two reasons. First, they were slow to transfer between the EC2 datacenter and Amazon S3, which meant saving and retrieving them was a tremendous act of patience. Second, they were limited to 10 gigabytes, which is enough to run a web server, however using it for almost anything else was impractical.
Before December 3rd, 2009, Amazon’s solution to these problems was the allocation of EBS as a secondary drive next to the main drive booting the AMI image. EBS is permanent storage, from 1 gigabyte to 1 terabyte, in the Amazon EC2 datacenter. This second drive (think d: drive) didn’t need to be copied from Amazon S3, so it was available instantly when the operating system booted, and it could be much bigger than the 10 gigabyte limit of the AMI boot image. However, this didn’t work well for the Windows operating system. Most of the strain on the 10 gigabyte main (c:) drive was from Windows updates and program installations, which typically require that they be installed on the c: drive (under Program Files). Secondly, as pointed out above, Windows needs to know its permanent drives before it boots, and there are no device drivers for EBS drives. This left Windows operators doing some pretty convoluted stuff to figure out how to get a little more storage from EBS drives. All in all it wasn’t worth the effort.
“On December 3rd Amazon announced that you could boot from Elastic Block Storage (EBS)”
What this meant is that Amazon provided a few pre-installed images of the Windows operating system (and Linux) that were on EBS. Booting from these works like this:
1) Choose an operating system pre-installed on an EBS volume from the list.
2) Boot your Amazon EC2 instance and wait for the System log to tell you that Windows is ready.
3) Log in via RDC and do some stuff.
4) Shut down your Amazon EC2 instance -- do not terminate.
That is it. Your personal EBS volume (a copy of the pre-installed EBS image) stays in the EC2 data center. When shut down, the instance is not charged the CPU fee, just the storage fee. And when you start it the next time, it starts much faster since it doesn’t need to be copied from S3.
However, the pre-installed EBS image you choose has a fixed hard disk size. I am running the medium instance of the Windows 2003 basic image; it was installed on a 30 gigabyte EBS volume, which is plenty for what I am doing. Basically, Amazon had to make a choice about the EBS size before they installed the operating system. Coming from the Virtual Server 2005 world I find myself a little spoiled. In Virtual Server you could have dynamically expanding drives that increased their storage (up to 256 gigabytes) as you used it (wrote more to the disk). In the EBS world you have a fixed, pre-allocated hard disk size.
Booting from EBS makes running the Windows operating system practical on Amazon EC2. It is my recommendation that you always choose an EBS image to boot from.
{6230289B-5BEE-409e-932A-2F01FA407A92}
There is no such thing as a shortcut to a shortcut in the Windows file system. If you create a shortcut from a shortcut, you just get a shortcut to the original target. You can programmatically create a shortcut to a shortcut using CLSID_ShellLink; however, Windows will not render the icon correctly or navigate to the end target correctly.
{6230289B-5BEE-409e-932A-2F01FA407A92}
You cannot call the IShellLink method SetPath with a full parsable path to a namespace extension in Windows 7. Instead you have to call IShellLink::SetIDList. Once you have set the ID list and saved the .lnk file via the IPersistFile::Save method, the shortcut will find your namespace extension (on open). However, CLSID_ShellLink is unable to render the image via IExtractImage for the link if the link is to an image (or other file type that supports IExtractImage). Instead it renders the default icon for your perceived type. It asks your namespace extension for the perceived type via IShellFolder2::GetDetailsOf.
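Here is a minimal sketch of the SetIDList approach, assuming pidl is the absolute PIDL of the item inside your namespace extension and pszLnkPath is where the .lnk file should be saved:

#include <shlobj.h>

HRESULT CreateLinkToItem(PCIDLIST_ABSOLUTE pidl, LPCOLESTR pszLnkPath)
{
    IShellLink *psl;
    HRESULT hr = ::CoCreateInstance(CLSID_ShellLink, NULL, CLSCTX_INPROC_SERVER,
                                    IID_PPV_ARGS(&psl));
    if (SUCCEEDED(hr))
    {
        // SetIDList instead of SetPath -- the item has no file system path.
        hr = psl->SetIDList(pidl);
        if (SUCCEEDED(hr))
        {
            IPersistFile *ppf;
            hr = psl->QueryInterface(IID_PPV_ARGS(&ppf));
            if (SUCCEEDED(hr))
            {
                hr = ppf->Save(pszLnkPath, TRUE);
                ppf->Release();
            }
        }
        psl->Release();
    }
    return hr;
}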
{6230289B-5BEE-409e-932A-2F01FA407A92}
My definition of a build server is a standalone box that has been configured so that it can build a release of your software with one command. The server typically has compilers and installation packaging software, and is dedicated to the task of building a release. Questions from the Joel Test that a build server needs to answer in the affirmative:
Build in one step?
Daily builds?
My build scripts, which run on the build server, perform these tasks:
Over the years, I have built different types of build servers and written many build scripts. Ten years ago, I had dedicated build servers. Lately, I use virtual machines hosted on top of Virtual Server 2005. Since I am moving everything to the cloud, I wanted to use Amazon EC2 for Windows as my build server.
One reason I chose Amazon EC2 is that my source control hosting company is unfuddle.com, and they use Amazon EC2 and S3. Because they are on S3, data transfer between my EC2 server and S3 is free and very fast. The installations in step 5 are going back to my S3 account (for download by my users), so those are also free of transfer charges. However, regardless of where you are hosting your source control,
“If your source control is externally hosted on the Internet, an EC2 build server is the next step.”
I tried to do this several months ago; however, the 10 gigabyte limitation on the Amazon Machine Image (AMI) and the slow boot times of a saved AMI made it impossible. Several days ago Amazon announced that you could create a 30 gigabyte Elastic Block Store (EBS) instance on EC2. EBS instances start and stop much more easily than AMIs. This is the feature I needed to make this project work. 10 gigabytes is enough to run the IIS web server and a web site, but in order to get everything else in (Visual Studio installed, my source, and room to compile), I need that 30 gigabyte EBS instance. It took me 2 days to create the build server. Most of the time was spent installing software and configuring my build script to run remotely.
Here are the steps:
Creating a Key Pair
Once you create a key pair, you will not have to do it again. Now you need to create an instance.
Launch a Default Instance
Once you launch your instance, you need to wait for it to start. Go back to the instance list in the AWS Management Console and look for your booting instance. Keep checking the System Log (right click, System Log) until it says Windows is Ready to use and has a password line. It is not ready if it is blank – all black.
Getting Your Administrator Password
Once the instance is booted, it has a separate virtualized disk just for you.
“The local Administrator password that is just for your instance.”
This password is not what you see in the System Log. The text in the System Log plus your key pair decrypts into the Administrator password.
If you reset your password the next time you log in, you will never have to go through that decryption process again.
Logging in With Remote Desktop
“All access to a Windows EC2 image is done via remote desktop connection.”
Remote Desktop is a utility that you have on your Windows operating system in the Accessories folder. The first thing to do is figure out the DNS name that Amazon has allocated for your instance. It is one of the columns in the instance list, or if you want to copy it into your clipboard:
Change the Password
The next step is to change the password so you don’t have to remember or decrypt the default Amazon password. Once you are logged in, you can do this by clicking on the Windows Security option from the Start menu.
Run Windows Updates
The first thing you want to do with every new machine is run Windows Update. There have been many security updates since Windows 2003 R2. This is a good time to mention: when you shut down your EBS-backed Windows instance you are not charged for CPU time (however, you are charged for the storage of the image). Rebooting your instance (like you have to do when you run Windows Update) doesn’t make it disappear.
“The only way to lose the EBS image is to Terminate it via the AWS Management console.”
You do not want to terminate unless you want to start again from scratch. Shutting down Windows makes your instance shut down. So when you are done for the day, just shut Windows down and the EBS instance will be there for you to launch the next day. You are billed by the whole hour, so it doesn’t make sense (with a ten minute boot time) to shut it down when you go to lunch.
Remember that after each reboot forced on you by Windows Update, you need to rerun Windows Update until it tells you there are no more updates. This is Windows 2003 and it will not do all the updates at once.
Installing Software
The first thing I need to create a build machine is an install of Visual Studio 2008. However, there is no CD-ROM drive.
“Everything that you want to install on your EC2 instance needs to be downloaded to it.”
Since I am an MSDN subscriber, I can download the Visual Studio 2008 .iso directly from the Microsoft.com web site to the EC2 instance and save it to a file on the hard disk. Downloads to the EC2 instance are amazingly fast. Amazon has a great pipe to the Internet, and the EBS volume writes fast, much faster than a .vhd in Virtual Server 2005.
Once I have the .iso on the hard disk, I need to mount it to install. I am using a free installation of DAEMON Tools Lite. If you use the custom install option you can opt out of the installation changing your home page, changing your default search provider, and installing a toolbar in Internet Explorer. Once you have their software installed and you reboot, you can mount your .iso and it appears like a DVD under your computer in Windows Explorer.
The next thing I need to install is InstallShield 2008, which I have on DVD. To do this, I use Amazon S3 Organizer, a plug-in for Firefox, and upload the whole DVD to a bucket on Amazon S3 from my desktop computer. Then I install Firefox and Amazon S3 Organizer on the running EC2 instance and download the whole DVD to a directory on the C:\ drive. The only issue is that S3 Organizer won’t transfer zero byte files from S3 to the hard drive, and the InstallShield installation has about 20 files with nothing in them. I solved this problem by taking the error list of zero byte files from S3 Organizer and creating the 20 empty files by hand on the hard drive. Fortunately the install works from the directory without having to be run from the root of a drive.
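If you have to do this more than once, the empty placeholder files can be scripted instead of created by hand. A sketch using only built-in cmd.exe commands, assuming you have pasted the error list into a text file (zero-byte-files.txt is a made-up name):

rem Create an empty file for each path listed in zero-byte-files.txt
for /f "usebackq delims=" %%f in ("zero-byte-files.txt") do type nul > "%%f"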
Getting the Source Files
Checking out the source from my build script (a batch file) requires that I have an SVN client installed. This is because unfuddle.com, my source control repository, supports SVN. I use SlikSVN 1.6, which is free and which I can download directly to the EC2 instance. The command I add to the batch file looks like this:
svn.exe update --non-interactive
Since I have already checked out the repository to the build root, svn.exe knows which repository to pull the files from. Because unfuddle.com uses S3, updating the files is amazingly fast.
Moving the Installations Off the Server
Once I have the installations created using the InstallShield product, I need to move them to a location where my users can download them. Since I want to shut down the build machine after I am done building, they need to move off of this machine. I chose to place them in my Amazon S3 bucket and make them publicly available. Anyone can download them directly from Amazon’s servers. To do this, I needed to write some C# code that copied the files to the right location. I created a C# console application called BuildHelper.exe that takes in a number of parameters and creates a unique file path and name for each installation that the build script created. The BuildHelper.exe project was added to the list of projects I was building and is compiled every time the build script runs.
The command line in the build script batch file looks like this:
echo "BuildHelper.exe" /publish /filename "bigDrive_S3_%1.%2.%3.%4_0600_x86.exe" /filepath "%INSTALLDIR%\Installation\AmazonS3\AmazonS3\WindowsVista\DiskImages\DISK1\setup.exe" /os "Windows Vista" /platform "x86" /productline "BigDrive" /product "BigDrive for Amazon S3" /major %1 /minor %2 /build %3 /intraday %4
%1 %2 %3 %4 are the version parameters passed into the batch file. Each build has a different version number. /filepath is the location of the setup.exe file. /filename is the new name in the Amazon S3 bucket. I have one of these lines for each operating system and installation created.
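The post doesn’t show BuildHelper’s internals, so here is an illustrative sketch of one way such /switch value pairs can be parsed:

using System;
using System.Collections.Generic;

// Illustrative parser: walks "/name value" pairs from the command line.
static Dictionary<string, string> ParseArguments(string[] args)
{
    Dictionary<string, string> options =
        new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);

    for (int index = 0; index < args.Length; index++)
    {
        if (args[index].StartsWith("/"))
        {
            string name = args[index].Substring(1);
            // Switches like /publish have no value; the rest take one.
            if (index + 1 < args.Length && !args[index + 1].StartsWith("/"))
                options[name] = args[++index];
            else
                options[name] = String.Empty;
        }
    }
    return options;
}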
Updating The Version Database
At the end of my build script, I am in a really good spot. The assemblies are compiled. I have a setup.exe and it has been deployed where users can download it. The build script “knows” that everything has happened successfully, the version of the release, and what was released. Now I need it to “tell” my version database this information so that the external web site is updated correctly. However, the build server cannot access the version database. The version database is behind my firewall in the company data center, and the EC2 image is out on the Internet and is going to be shut down.
To update the database I added another method to BuildHelper.exe that makes an HTTP REST API call to our external web site. The REST API is configured to accept the version number, product name, platform, etc… and update the internal version database. The database row is added with the public column bit turned off, so that the release doesn’t appear on the web site until we are ready to deploy it.
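A sketch of the kind of call that method can make using nothing but System.Net; the URL and parameter names here are made up for illustration, not the actual API:

using System;
using System.Net;

// Hypothetical endpoint and parameters -- substitute your own REST API.
static void PublishVersion(string product, string platform, string version)
{
    string url = String.Format(
        "https://www.example.com/api/releases?product={0}&platform={1}&version={2}&public=false",
        Uri.EscapeDataString(product),
        Uri.EscapeDataString(platform),
        Uri.EscapeDataString(version));

    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
    request.Method = "POST";
    request.ContentLength = 0;

    // The web site inserts the row with the public bit turned off.
    using (WebResponse response = request.GetResponse())
    {
    }
}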
Shutting Down
Once the build is completed, I shut down the Windows 2003 Server, which shuts down the EC2 instance and the clock stops running. The whole process takes about 30 minutes.
{6230289B-5BEE-409e-932A-2F01FA407A92}
There is a lot of documentation about running Amazon EC2, and if you are having trouble Amazon will quickly direct you to the forums for questions; however, there isn’t really a resource from a Windows-centric developer to other Windows gurus. As a 20-year Windows veteran, I am going to try to fill that gap with a few blog posts entitled “Amazon EC2 for Windows: The Basics”.
The second thing I do when I install a new Windows server is run Windows Update.
“The second thing that you need to do on your EC2 instance is install the Windows updates.”
The Windows 2003 R2 EBS instances are current to the R2 release of Windows 2003; however, there have been a lot of security hotfixes released since R2. I like to have them all applied. It is fairly painless given Amazon’s amazing download times.
Remember, this is Windows 2003, so you need to go back to Windows Update every time it makes you reboot, until there are no more updates to install.
{6230289B-5BEE-409e-932A-2F01FA407A92}
There is a lot of documentation about running Amazon EC2, and if you are having trouble Amazon will quickly direct you to the forums for questions; however, there isn’t really a resource from a Windows-centric developer to other Windows gurus. As a 20-year Windows veteran, I am going to try to fill that gap with a few blog posts entitled “Amazon EC2 for Windows: The Basics”.
Once your instance has booted, you have retrieved the administrator password from the AWS Management Console and successfully connected to the server via RDC.
“The first thing to do when you get logged in is change the administrator password”
Amazon provides a good secure password for the administrator account – it is obvious there is a lot of thought put into security. However, it is a pain in the butt to get the password the first time and log in. So changing the password saves you a lot of time on the next boot of the instance.
If you are using Windows you probably already know that you can’t copy and paste your password into the Windows login box. That box doesn’t support copy and paste. When you have a long password like the one Amazon initializes the instance with, it can sometimes be frustrating to get it right. However, there is a workaround in RDC to help you copy and paste the password.
1) Open Remote Desktop Connection.
2) Choose Options >>.
3) Then check the “Allow me to save credentials” checkbox.
4) Click Connect
5) The next dialog, which is designed to allow you to save your password, supports copy and paste into the password line.
This allows you to easily copy and paste from the AWS Management Console into the login box, making first-time remote desktop access faster.
{6230289B-5BEE-409e-932A-2F01FA407A92}
There is a lot of documentation about running Amazon EC2, and if you are having trouble Amazon will quickly direct you to the forums for questions; however, there isn’t really a resource from a Windows-centric developer to other Windows gurus. As a 20-year Windows veteran, I am going to try to fill that gap with a few blog posts entitled “Amazon EC2 for Windows: The Basics”.
I know it is supposed to work, and I know this will get fixed. However, as of December 2009:
“Sometimes windows instances do not start.”
I don’t want to be negative or just point out a bug; however, it is really frustrating when you don’t know this might be the case and it happens to you. It is less frustrating if you know this might happen and what to do about it.
The first thing is to figure out whether the instance started or not. The way to do this is to look at the System Log from the AWS Management Console. It should look something like this:
It should say “Windows is Ready to use” and it should have a password entry. If it is blank, then the instance didn’t start correctly.
The password entry is very important. If you are starting a new instance from one of the provided Amazon EBS images or AMIs, then the instance will come with a preset administrator password that can be decrypted with the key pair that you created to start that instance. The AWS Management Console reads the password from the System Log and gives you the option of decrypting it. If you don’t get this password line in the System Log, then the “Get Windows Password” option will come up with a blank dialog. Without the password line you can’t get the administrator password to log into the box. If you are booting one of your saved AMIs or EBS images, then you already know the password and don’t need this line.
Watch the System Log for updates. That being said,
“You will need to wait for the instance to start, up to 5 minutes for an EBS image.”
Even after the management console says the instance is in the started state, it still takes a few minutes to get the required output in the System Log. After ten minutes you need to reboot the instance – but only if the System Log has displayed the password or you already know it. Rebooting an instance that hasn’t displayed the password will not help; rebooting doesn’t reset the password.
If you see a blank System Log and have waited 10 minutes (for an EBS image), then you need to terminate and restart. If you see the password in the System Log, however not the “Windows is Ready to use” line, then you can try rebooting (which works better than terminating).
Saved AMI images that have been saved to your S3 account can take up to 20 minutes to boot.
{6230289B-5BEE-409e-932A-2F01FA407A92}
There is a lot of documentation about running Amazon EC2, and if you are having trouble Amazon will quickly direct you to the forums for questions; however, there isn’t really a resource from a Windows-centric developer to other Windows gurus. As a 20-year Windows veteran, I am going to try to fill that gap with a few blog posts entitled “Amazon EC2 for Windows: The Basics”.
For me EC2 is all about replacing servers with cloud computing, and when I get rid of my old servers it is my Linux friends that take them. As Windows IT people we always buy the latest and greatest servers, and they seem super fast when we buy them. I get good mileage out of my servers, usually 5 to 10 years. However, when I go to sell them it seems that they only fetch $50. Still, there is always some Linux geek ready to snatch one up and replace the Pentium 3 server he has in his basement. Doing so he brags: “I can run 500 web sites on this server” and off he goes into his cave. The point of this ramble: Windows needs more server horsepower than Linux, and yes, to their benefit, Linux geeks can get more from less.
“When running an EC2 instance for Windows, never choose the small instance type.”
When you go to launch an instance you are given a choice between small (m1.small, 1.7GB) and medium (c1.medium, 1.7GB)*. Choose the medium instance.
Medium instances boot faster, install faster, and run faster. It is not like regular and premium gasoline at the gas station; medium instances really are faster. Compare it to a 4 cylinder automobile versus a 6 cylinder. For me the savings in time is worth the price.
* These are the instance types at the time of writing, December 2009.
{6230289B-5BEE-409e-932A-2F01FA407A92}
If you know me, you will know that I love GUIDs and use them for everything. One of the things I use them for is to create a unique identifier for the error messages in my code. I have derived from the Exception class, adding my own constructor that looks like this:
public ProviderException(System.Guid error, string message, Exception innerException)
    // Format the GUID into the message; pass the trapped exception
    // through as the inner exception.
    : base(String.Format(CultureInfo.CurrentCulture, "{{{0}}}: {1}",
        error.ToString(), message), innerException)
{
}
Basically, anytime that I want to throw an exception in my code, I do so with a unique GUID. The calling code looks like this:
catch(SoapException soapException)
{
throw (new ProviderException(new Guid("{513EEF48-8C02-4135-9344-2A401EAF2112}"), soapException.Message, soapException));
}
I have the option of creating my own message, and I always pass the exception I am trapping as the inner exception. Notice that I have a hard coded GUID in my code. It is not Guid.NewGuid() – since that would create a different GUID every time the error happened. What I want is a unique identifier for the error line in the code.
My client application catches all the errors and presents the user with this dialog:
Notice the more information link at the bottom of the dialog. If the user clicks on this link, a browser window is opened to:
http://www.bigdrive.net/Error/513eef48-8c02-4135-9344-2a401eaf2112
This is an ASP.NET MVC page with more information about the issue and a forum for user discussion. This allows my company to update the error information without having the user upgrade their installed application. It also allows us to post solutions on the web site.
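Wiring a URL like that up in ASP.NET MVC routing is a one-liner in Global.asax; the controller and action names below are hypothetical, not necessarily what the site uses:

// Maps http://www.bigdrive.net/Error/{guid} to ErrorController.Details(guid).
routes.MapRoute(
    "Error",
    "Error/{guid}",
    new { controller = "Error", action = "Details" });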
The user can also do a search for the error message using the GUID and see everyone else that has blogged, commented, or complained about the error.
Note also that I might not know all the errors that will happen in my application, or how often they occur; however, I can get statistics on how many times the error information is requested from the web site, focusing my bug fixing efforts for the next revision.
{6230289B-5BEE-409e-932A-2F01FA407A92}
This is the code I have:
cd %PROJECTROOTLOCAL%\Installation\AmazonS3\AmazonS3\
del /s /f /q *.*
What I want to do is delete all the files in the Installation\AmazonS3\AmazonS3 directory. However, what if the directory doesn’t exist? Well, I get this error on the first line:
The system cannot find the path specified.
And the current directory isn’t changed, which leaves me at the root of my project. Which means that the second command deletes all the files in my project. Not good. The fix is this:
mkdir %PROJECTROOTLOCAL%\Installation\AmazonS3\AmazonS3\
cd %PROJECTROOTLOCAL%\Installation\AmazonS3\AmazonS3\
del /s /f /q *.*
Or is there something better?
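One answer, sketched with only built-in cmd.exe features: either chain the commands so the delete never runs unless the cd succeeded, or remove and recreate the whole tree without depending on the current directory:

rem Option 1: del only runs if cd succeeds.
cd %PROJECTROOTLOCAL%\Installation\AmazonS3\AmazonS3\ && del /s /f /q *.*

rem Option 2: remove the tree if it exists, then recreate it empty.
if exist "%PROJECTROOTLOCAL%\Installation\AmazonS3\AmazonS3\" rmdir /s /q "%PROJECTROOTLOCAL%\Installation\AmazonS3\AmazonS3"
mkdir "%PROJECTROOTLOCAL%\Installation\AmazonS3\AmazonS3"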
{6230289B-5BEE-409e-932A-2F01FA407A92}
In Vista and newer Windows operating systems you can implement the IPropertyStore interface for your Shell namespace extensions. One of the things that is asked of your items is their PKEY_ThumbnailCacheId. The PKEY_ThumbnailCacheId is used to determine whether the thumbnail image can be used from the thumbnail cache or whether it has to be regenerated by the Shell. There is no documentation about PKEY_ThumbnailCacheId. However, the propkey.h file in the Windows SDK states:
// Unique value that can be used as a key to cache thumbnails.
// The value changes when the name, volume, or data modified
// of an item changes.
However, the return value of IPropertyStore::GetValue is a PROPVARIANT, which, much like a VARIANT, can be almost anything. So what does the Shell expect you to return from the GetValue method when it asks for PKEY_ThumbnailCacheId? A hint can be found in the definition of the IThumbnailCache::GetThumbnailByID method. The unique identifier comes in as a type of WTS_THUMBNAILID – equal to BYTE rgbKey[16].
So a good key would be an array of bytes with a length of 16. Where do we get that from? Well, it just so happens that in propvarutil.h there is a function called InitPropVariantFromGUIDAsBuffer, which takes a GUID and creates a byte array with a length of 16 – since sizeof(GUID) is equal to 16.
So if you use the InitPropVariantFromGUIDAsBuffer() function to set the PROPVARIANT in the IPropertyStore::GetValue call with a unique GUID for your shell item, everything will work as expected.
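In code, the GetValue handler ends up looking something like this sketch (the class name and the m_thumbnailId member are illustrative, not from the original post):

#include <propkey.h>
#include <propvarutil.h>

IFACEMETHODIMP CMyShellItem::GetValue(REFPROPERTYKEY key, PROPVARIANT *ppropvar)
{
    if (IsEqualPropertyKey(key, PKEY_ThumbnailCacheId))
    {
        // m_thumbnailId is a GUID we regenerate whenever the item's image
        // data changes, so the Shell knows to rebuild the cached thumbnail.
        return InitPropVariantFromGUIDAsBuffer(m_thumbnailId, ppropvar);
    }

    PropVariantInit(ppropvar); // VT_EMPTY for properties we don't supply
    return S_OK;
}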
Make sure you change the GUID every time the image changes; otherwise you might be showing a thumbnail with old data.
By the way, 16 bytes is also what is returned from an MD5 hash of a byte array.
{6230289B-5BEE-409e-932A-2F01FA407A92}
When writing a Shell namespace extension with DefView, it is important to realize that IShellFolderViewCB is not optional for proper updates of the DefView.
If you construct your DefView like this:
::SHCreateShellFolderView(pcsfv, (IShellView **)ppvReturn);
One of the parameters of SFV_CREATE is psfvcb which points to an instance of IShellFolderViewCB. IShellFolderViewCB is a callback interface that you can use to get notified of what is happening in the DefView.
If you don’t care what DefView is doing, it is tempting to leave it NULL. However, if you do, the DefView will not update correctly on calls like this:
::SHChangeNotify(SHCNE_MKDIR, SHCNF_IDLIST | SHCNF_FLUSH, *ppidl, NULL);
So always implement an IShellFolderViewCB interface with the minimum functionality when using the DefView.
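A minimal sketch of that wiring, assuming CFolderViewCB is your (possibly do-almost-nothing) IShellFolderViewCB implementation and m_psf is the IShellFolder being viewed:

SFV_CREATE sfvCreate = { sizeof(SFV_CREATE) };
sfvCreate.pshf = m_psf;                  // the folder being viewed
sfvCreate.psfvcb = new CFolderViewCB();  // minimal callback -- never leave NULL
HRESULT hr = ::SHCreateShellFolderView(&sfvCreate, (IShellView **)ppvReturn);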
{6230289B-5BEE-409e-932A-2F01FA407A92}
<%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %>
<% if (Request.IsAuthenticated) { %>
    Welcome <b><%= Html.Encode(Page.User.Identity.Name) %></b>!
    [ <%= Html.ActionLink("Log Off", "LogOff", "Account") %> ]
<% } else { %>
    [ <%= Html.ActionLink("Log On", "LogOn", "Account") %> ]
<% } %>

However, this isn’t 100% correct. It just so happens that Page.User.Identity.Name in their example is the user’s name; however, in most cases Page.User.Identity.Name is really a unique (primary) key to a users table. This is a little cleaner:
<%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %>
<% if (Request.IsAuthenticated) { %>
    Welcome <b><%= Html.Encode(Membership.GetUser(Page.User.Identity.Name).UserName) %></b>!
    [ <%= Html.ActionLink("Log Off", "LogOff", "Account") %> ]
<% } else { %>
    [ <%= Html.ActionLink("Log On", "LogOn", "Account") %> ]
<% } %>

This change makes sure that we are getting the username (display name) from the membership provider, regardless of what is used for the key to the provider.
{6230289B-5BEE-409e-932A-2F01FA407A92}
<INPUT TYPE="TEXT" NAME="Login" /><br />
<INPUT TYPE="PASSWORD" NAME="Password" />

This HTML delivers the login and password to the browser in clear text. The web application takes the password, concatenates it onto a stored piece of random text (called salt), and then hashes the complete string. This hash is compared with the hash stored for that login, and if they match then the web application can assume that the user knows their password. If the web developer implemented good security for his system, then he never stores the password, only a hash of the salt and the password. The salt helps make every hash unique.

Why do web developers hash passwords? Well, most people only have four to five passwords that they use on roughly a hundred sites. If one of those sites is compromised and the password is in clear text, then that password can be tried on other sites to gain access as that user. It also limits the harm an employee can cause if they have access to every password.

Since you don’t know if the web site you are entering your password into is storing it hashed or in clear text, it is in your best interest to change your password for every web site. However, keeping track of all those passwords becomes a maintenance nightmare. If you can’t remember them all you will need to write them down – and then where do you store them to keep them safe?

So here is the big idea: Why doesn’t the browser hash the password before it sends it to the web site? This way the user would know that the web site couldn’t store his/her password as clear text, because the web site would never get the clear text from the browser. The web developer would just store what they get from the browser, the hash of the password, and compare that hash every time the user logs in.

If the browser is going to hash the password, it has to know the salt in order to create a unique hash. This is the tricky part. Every user needs to have a different salt, and that salt needs to be the same for that user every time. The salt can’t change based on the browser, the computer, or the location of the computer. The salt must be different for each web site – since having the same salt and password would mean the same hash for every web site, which is the same security risk as the same password stored as clear text.

My suggestion is that the salt be the domain name of the site concatenated with the login. The domain name and the login are known by the browser when the form is submitted; the combination is always unique per user and would be the same no matter where they logged in from. Plus, each salt would be different for each domain. An evildoer that compromised the web application and read the password hash from the web database wouldn’t know the hash for another web site. Basically it is the same as making the browser generate a unique password for every web site.

How would the browser do this securely? How would the HTML have to be rewritten (i.e. how does the browser know which input is the login)? Would the DOM restrict client side scripting from reading the password? These are all questions I don’t have the answers for. However, I do know that password handling on hundreds of web sites is going to be an issue that needs solving in the next four to five years.
{6230289B-5BEE-409e-932A-2F01FA407A92}
<head>
    <meta name="Description" content="" id="metaDescription" runat="server" />
</head>

Here I am trying to add a meta description to the head of the HTML so that the search engines know what the page is about. What I was doing was having the page check the type of the master class in the page’s OnInit event and, if it was the right master class, find the public property on the MasterPage that would set the MetaDescription. When the MasterPage pre-rendered, it would insert the value of that property into the content attribute of the server side control. With ASP.NET MVC you don’t have code behind. It was not obvious to me (however, it makes sense now) that the MasterPage has access to all the ViewData, so in MVC the above looks like this:
<head>
    <meta name="Description" content="<%=ViewData["MetaDescription"]%>" />
</head>

All I have to do in this case is set the MetaDescription entry of the ViewData in the controller and it will get filled in on this MasterPage. It is also not obvious that if I don’t set the ViewData for the MetaDescription, then the content attribute gets a String.Empty. You might say: "That is how it should work?" Well it should, unless you consider that ViewData holds objects and has an arbitrary number of entries. So referencing an unset entry should give a KeyNotFoundException or maybe a NullReferenceException; however, this has been cleanly handled in ASP.NET MVC. Which means that as I program the view, I don’t need to worry about all the controllers that use this view setting the MetaDescription. If they set it, great, I will use it; otherwise it is blank.
{6230289B-5BEE-409e-932A-2F01FA407A92}
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Web.Routing;
using System.Web.Mvc;
using System.Text.RegularExpressions;

namespace WebUtility
{
    public class RegexRoute : RouteBase
    {
        public Regex Regex { get; private set; }
        public String[] Groups { get; private set; }
        private IRouteHandler RouteHandler { get; set; }
        public String Controller { get; private set; }
        public String Action { get; private set; }

        /// <summary>
        /// Creates A Regular Expression Route
        /// </summary>
        /// <param name="regex">Regular Expression To Use Against
        /// the AbsolutePath of the request.</param>
        /// <param name="groups">Groups In The Match</param>
        /// <param name="routeHandler">The Handler
        /// to use for this route.</param>
        public RegexRoute(Regex regex, String[] groups, IRouteHandler routeHandler)
        {
            Regex = regex;
            Groups = groups;
            RouteHandler = routeHandler;
        }

        /// <summary>
        /// Constructor that allows you to specify the
        /// controller and the action.
        /// </summary>
        /// <param name="regex">Regular Expression</param>
        /// <param name="groups">Groups In The Match</param>
        /// <param name="controller">Explicit Name Of The Controller</param>
        /// <param name="action">Explicit Name Of The Action</param>
        public RegexRoute(Regex regex, String[] groups, String controller, String action)
        {
            Regex = regex;
            Groups = groups;
            Controller = controller;
            Action = action;
        }

        /// <summary>
        /// Constructor that assumes controller and
        /// action are in the groups.
        /// </summary>
        /// <param name="regex"></param>
        /// <param name="groups"></param>
        public RegexRoute(Regex regex, String[] groups)
        {
            Regex = regex;
            Groups = groups;

            List<String> list = new List<String>(groups);
            if (!list.Contains("controller"))
                throw (new Exception(
                    "Controller group expected in regular expression match"));
            if (!list.Contains("action"))
                throw (new Exception(
                    "Action group expected in regular expression match"));
        }

        public override RouteData GetRouteData(
            System.Web.HttpContextBase httpContext)
        {
            MatchCollection matchCollection =
                Regex.Matches(httpContext.Request.Url.AbsolutePath);

            switch (matchCollection.Count)
            {
                case 0:
                    // WWB: There Is No Match --
                    // This Route Doesn't Handle This URI
                    return (null);

                case 1:
                    // WWB: Fill Out The Route Data
                    RouteData routeData = new RouteData();
                    routeData.Route = this;

                    if (RouteHandler != null)
                        routeData.RouteHandler = RouteHandler;
                    else
                        routeData.RouteHandler = new MvcRouteHandler();

                    if (!String.IsNullOrEmpty(Controller))
                        routeData.Values.Add("controller", Controller);
                    if (!String.IsNullOrEmpty(Action))
                        routeData.Values.Add("action", Action);

                    // WWB: No Group Names, No Values Outputted.
                    if (Groups != null)
                    {
                        // MSDN: The GroupCollection object returned
                        // by the Match.Groups property
                        // always has at least one member.
                        if (matchCollection[0].Groups.Count != Groups.Length)
                            throw (new Exception(String.Format(
                                "{0} contains {1} groups when matching {2}, however " +
                                "there are only {3} mappings. There needs to be an " +
                                "equal number of mappings to groups; note that " +
                                "there is always one group for the whole string.",
                                httpContext.Request.Url.AbsoluteUri,
                                matchCollection[0].Groups.Count,
                                Regex.ToString(),
                                Groups.Length)));

                        // WWB: Map all the groups into the values for the RouteData
                        for (Int32 index = 0; index < matchCollection[0].Groups.Count; index++)
                        {
                            routeData.Values.Add(Groups[index],
                                matchCollection[0].Groups[index]);
                        }
                    }

                    return (routeData);

                default:
                    throw (new Exception(
                        String.Format("There Are Multiple Matches For {0} on {1}, " +
                            "which means that the regular expression has more " +
                            "than one non-overlapping match.",
                            Regex.ToString(),
                            httpContext.Request.Url.AbsoluteUri)));
            }
        }

        public override VirtualPathData GetVirtualPath(
            RequestContext requestContext, RouteValueDictionary values)
        {
            throw new NotImplementedException();
        }
    }
}

{6230289B-5BEE-409e-932A-2F01FA407A92}
routes.Add(new Route(
    "{*path}/{name}.htm",
    new NameRouteHandler()));

However, I got this error message: "A catch-all parameter can only appear as the last segment of the route URL." Which was an issue; the path, which I tossed away, had a lot of optional subdirectories which didn't map well to the MVC route syntax. What I really wanted was to treat the routing syntax like a regular expression. To make this happen I created my own Route class, subclassed from RouteBase, that takes a regular expression. So now the route declaration looks like this:
routes.Add(new RegexRoute(
    new Regex(@".*/(.*)\.htm"),
    new String[] { "all", "name" },
    new NameRouteHandler()));

The second parameter isn't the defaults; it is the group names. Regular expressions have the concept of groups within a match, and the parentheses in the regular expression tell it what to group up -- in this case the name. With groups, the complete match is always the first group. The RegexRoute class tries to match the AbsoluteUri of the request with the regular expression; if it can't, it returns null. If it can, it fills out the RouteData class and returns it. For all the groups it creates a value entry in RouteData that gets passed to the controller. The value name is the name listed in the second parameter. I haven't implemented GetVirtualPath, which I will do as I learn more about MVC. It is used by the TDD testing to generate URLs, and I am guessing it will be the tricky part of this class to implement. Here is what the class looks like:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Web.Routing;
using System.Text.RegularExpressions;

namespace WebUtility
{
    public class RegexRoute : RouteBase
    {
        public Regex Regex { get; private set; }
        public String[] Groups { get; private set; }
        private IRouteHandler RouteHandler { get; set; }

        /// <summary>
        /// Creates A Regular Expression Route
        /// </summary>
        /// <param name="regex">Regular Expression To Use Against
        /// the AbsoluteUri of the request.</param>
        /// <param name="groups">An Array Of group names to use
        /// for Value names of the RouteData</param>
        /// <param name="routeHandler">The Handler
        /// to use for this route.</param>
        public RegexRoute(Regex regex, String[] groups, IRouteHandler routeHandler)
        {
            Regex = regex;
            Groups = groups;
            RouteHandler = routeHandler;
        }

        public override RouteData GetRouteData(
            System.Web.HttpContextBase httpContext)
        {
            MatchCollection matchCollection =
                Regex.Matches(httpContext.Request.Url.AbsoluteUri);

            switch (matchCollection.Count)
            {
                case 0:
                    // WWB: There Is No Match --
                    // This Route Doesn't Handle This URI
                    return (null);

                case 1:
                    // MSDN: The GroupCollection object returned
                    // by the Match.Groups property
                    // always has at least one member.
                    if (matchCollection[0].Groups.Count != Groups.Length)
                        throw (new Exception(String.Format(
                            "{0} contains {1} groups when matching {2}, however " +
                            "there are only {3} mappings. There needs to be an " +
                            "equal number of mappings to groups; note that " +
                            "there is always one group for the whole string.",
                            httpContext.Request.Url.AbsoluteUri,
                            matchCollection[0].Groups.Count,
                            Regex.ToString(),
                            Groups.Length)));

                    // WWB: Fill Out The Route Data
                    RouteData routeData = new RouteData();
                    routeData.Route = this;
                    routeData.RouteHandler = RouteHandler;

                    // WWB: No Group Names, No Values Outputted.
                    if (Groups != null)
                    {
                        // WWB: Map all the groups into the values for the RouteData
                        for (Int32 index = 0; index < matchCollection[0].Groups.Count; index++)
                        {
                            routeData.Values.Add(Groups[index],
                                matchCollection[0].Groups[index]);
                        }
                    }

                    return (routeData);

                default:
                    throw (new Exception(
                        String.Format("There Are Multiple Matches For {0} on {1}, " +
                            "which means that the regular expression has more " +
                            "than one non-overlapping match.",
                            Regex.ToString(),
                            httpContext.Request.Url.AbsoluteUri)));
            }
        }

        public override VirtualPathData GetVirtualPath(
            RequestContext requestContext, RouteValueDictionary values)
        {
            throw new NotImplementedException();
        }
    }
}

{6230289B-5BEE-409e-932A-2F01FA407A92}
GEvent.addListener(marker, 'click', function() { … });

The first parameter is the marker instance, the second is the event, and the last parameter is the function to execute when the click event takes place on the marker. However, notice that there isn't any place to send in parameters. addListener doesn't have an optional parameters property, and the method doesn't allow you to create a function that takes parameters – since it wouldn't know what to pass there. Which leaves me with the problem: how do I pass in the HTML that I want to display on the click? I could create a global array of HTML data that I fill and reference that global from inside the function; however, this is messy. The answer is to tell the marker what the HTML is and then extract it from the marker. However, the marker is of type GMarker, which is a Google class. If this was C# I would have to subclass the GMarker object and add another property. However, subclassing (or the equivalent) is way too complicated in JavaScript. The solution is to understand that you can just add a property to an instance of a JavaScript object without having to declare the property in the object. In fact you can just tag on the HTML data and get it later – even if it is someone else's object. So my code looks like this:
var marker = new GMarker(point);
marker.html = '<b>' + title + '</b>';
GEvent.addListener(marker, 'click', function() { this.openInfoWindowHtml(this.html); });

In the function, this refers to the instance of the object on which the action was taken, which in this case is the marker. So this.html has the information for this marker's HTML, which is displayed in the info window.
{6230289B-5BEE-409e-932A-2F01FA407A92}