
Ideal way to import CSV data and create column vectors that will be variables I want to work with

51 views (last 30 days)
I have a .csv file that I wish to import, creating column vectors (variables) that I can manipulate further with my code. Below is my code. I also wish to remove NaN values from all columns that have them. In addition, I want to multiply all of the EMG data columns by 1000, which is why that number appears towards the end of my code. This code seems to work, but I was told not to use eval when writing code, and I just wanted to see if someone can check it and offer any suggestions to code it differently.
Thank you in advance!
%Read the imported matrix and create a data struct to store variables.
%Create variables in the workspace with the same names as the column names
%and assign the corresponding columns of the matrix as their values.
matrix = readmatrix(filename, 'NumHeaderLines', 7);
data = struct();
for i = 1:numel(column_names)
    variable_name = column_names{i};
    assignin('base', variable_name, matrix(:,i));
end
%Multiply the EMG columns by 1000
for i = 1:numel(EMGVariables)
    variable_name = EMGVariables{i};
    variable = 1000*eval(variable_name);
    assignin('base', variable_name, variable);
end
  1 Comment
Stephen23 on 23 Jan 2023
"This code seems to work but i was told to not use eval in code writing..."
EVAL and ASSIGNIN (and various others) are slow, hard to debug, and best avoided.
"...and just wanted to see if someone can check it and offer any other suggestions to code it differently"
Use a table.
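For instance, the whole task fits in a few lines once everything lives in one table (a minimal sketch, not the poster's exact file: the filename 'mydata.csv' and the EMG column names here are hypothetical):

```matlab
% Read the file straight into a table; each column becomes a named variable
T = readtable('mydata.csv', 'NumHeaderLines', 7);   % hypothetical filename

% Drop any rows that contain NaN in any column
T = rmmissing(T);

% Scale a named subset of columns (e.g. the EMG channels) by 1000
emgNames = {'EMG1','EMG2'};                         % hypothetical column names
T{:, emgNames} = 1000 * T{:, emgNames};

% Access any column by name -- no eval, no assignin
plot(T.EMG1)
```

Every column is reachable as T.name or T{:, names}, so there is never a reason to spray separate variables into the base workspace.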


Accepted Answer

dpb on 23 Jan 2023
opts = detectImportOptions(filename, 'NumHeaderLines', 7); % get an import options object
opts.MissingRule = 'omitrow'; % skip any incomplete rows
opts.ImportErrorRule = 'omitrow'; % ditto any bad conversions...
tData = readtable(filename, opts); % and bring in the table
tData.Properties.VariableNames = column_names; % and name the variables as desired
tData{:,EMGVariables} = 1000*tData{:,EMGVariables}; % and scale the given subset by variable names
As the above illustrates, you can dynamically address any or all variables by name -- without creating a million separate variables, and without very slow, difficult-to-debug eval expressions -- by the simple expedient of using a table.
Using the import options object also lets you clean the table of any bad data on import. That fixes an important problem with the previous approach: the data, held as independent variables, were not necessarily consistent in length or position between samples, so a missing value removed from one column was not being compensated for in any other column.
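For example, the EMG scaling loop from the question collapses to dynamic field access on the table, with no eval or assignin (a sketch assuming the tData table and EMGVariables name cell array from above):

```matlab
% Dynamic access by name: tData.(name) replaces eval(name)
for k = 1:numel(EMGVariables)
    name = EMGVariables{k};          % e.g. 'EMG_biceps' (hypothetical name)
    tData.(name) = 1000 * tData.(name);
end
```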
dpb on 24 Jan 2023
"...Seems i cannot attach the file as it is too large..."
Would only need the first 100 lines or so -- open it in the editor, save just a piece of it to a new file, and attach that. Or, as a .csv, it would probably zip down to a small size, although that is somewhat less convenient.
"...Remove NaN value from each data column since it is all sampled at different frequencies..."
That may need some consideration of how you import the data, then. Probably the first thing to do after import would be to resample everything to a common time base. But that explains what wasn't evident in the original post about NaN and removing NaN data independently per column. We would need to see the file to really tell the "ideal" way given this additional information; the above was based on erroneous (or at least incomplete) assumptions about the file structure.
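Resampling to a common time base can be sketched with timetables (a sketch only, assuming the signals are split into tables T1 and T2, each with a duration or datetime column named 't' -- all hypothetical names):

```matlab
% Convert each signal table to a timetable keyed on its own time column
tt1 = table2timetable(T1, 'RowTimes', 't');   % hypothetical tables T1, T2
tt2 = table2timetable(T2, 'RowTimes', 't');

% Resample both onto a regular 1 kHz grid, interpolating between samples,
% so every signal shares one consistent time base with no stray NaN rows
ttAll = synchronize(tt1, tt2, 'regular', 'linear', 'SampleRate', 1000);
```

Whether linear interpolation is appropriate depends on the signals; retime/synchronize also offer 'nearest', 'spline', and aggregation methods.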


More Answers (0)



