ErrorCode=UserErrorInvalidColumnMappingColumnNotFound,Column 'xxx' specified in column mapping cannot be found in source data.

{tocify} $title={Table of Contents}

Error/Issue:

I am currently working on a pipeline that copies data from a CSV file and loads it into a database table. For that, I configured the Copy Data activity:

i. Configured the source settings and, to preview data, provided the path to a sample CSV file.

ii. Configured the sink settings - provided the table name and a pre-copy script.

iii. Once both were defined, the next step was the mapping between them. For that, you click Import schemas, which basically reads the schema from the source file you provided and from the database table you specified on the sink side.

After it gets the schema from both sides, it presents you with auto-suggested mappings; if those look OK, use them, else you can change them as per your requirement.
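For reference, here is a minimal sketch of roughly what this configuration ends up as in the Copy activity's pipeline JSON; Import schemas is what generates the translator section. The activity name, the dbo.DailyCounts table, the TRUNCATE pre-copy script and the Count column are hypothetical, while Countdate/COUNTDATE come from my scenario:

    {
        "name": "CopyCsvToTable",
        "type": "Copy",
        "typeProperties": {
            "source": { "type": "DelimitedTextSource" },
            "sink": {
                "type": "AzureSqlSink",
                "preCopyScript": "TRUNCATE TABLE dbo.DailyCounts"
            },
            "translator": {
                "type": "TabularTranslator",
                "mappings": [
                    { "source": { "name": "Countdate" }, "sink": { "name": "COUNTDATE" } },
                    { "source": { "name": "Count" }, "sink": { "name": "COUNT" } }
                ]
            }
        }
    }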

Automapping in Azure Data Factory



That's what I did, but when I started testing, I got the below error.

column mapping cannot be found in source






Why it happened


ErrorCode=UserErrorInvalidColumnMappingColumnNotFound,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Column 'COUNTDATE' specified in column mapping cannot be found in source data.,Source=Microsoft.DataTransfer.ClientLibrary,'



What does this error say? It says that a specific column, COUNTDATE, specified in the column mapping cannot be found in the source data. But the question is: I had created the schema from the file I had, so why the error?


The reason is that the sample file I referred to when creating the source-side schema had a column named Countdate, but when I started testing, the actual test files had Date as the column name.

Thus, at runtime, when the column mapping was to be applied, the engine didn't find the expected column per the mapping instruction, and hence the error.
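To make the mismatch concrete, suppose the two files had these headers (the data rows are made up for illustration):

    Sample file used for Import schemas:
        Countdate,Count
        2024-01-01,42

    Actual test file at runtime:
        Date,Count
        2024-01-01,42

The mapping still contains { "source": { "name": "Countdate" }, ... }, and since no column named Countdate exists in the actual file's header, the copy fails with the error above.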


What to do



To avoid/solve such an error, you have to make sure that the source file provides the input the mapping requires. That means whatever schema you have imported, the files have to adhere to it - they have to match - and if there is any mismatch, it will give you an error.


In case you change or modify the source file, you have to make sure you update the mapping as per the changes you have made in your file.


In my case, where I found the difference between the sample file and the actual file, I needed to go back to the mapping section, import the schema again (using the actual file), and redo the mapping. That's what I did, and the issue was fixed.
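Continuing the hypothetical example from above, after re-importing the schema from the actual file, the translator section would look roughly like this - the source name changes to Date while the sink column stays the same:

    "translator": {
        "type": "TabularTranslator",
        "mappings": [
            { "source": { "name": "Date" }, "sink": { "name": "COUNTDATE" } },
            { "source": { "name": "Count" }, "sink": { "name": "COUNT" } }
        ]
    }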


If you think there is a better way to solve this problem, please comment!





Do share if you find this helpful .......
 
                          Knowledge Sharing is Caring !!!!!!


Learn more about other Azure Data Factory errors




If you have any suggestions or questions, or want to share something, then please drop a comment.
