This is how I add a folder to the MATLAB path from the command window:
if(~exist('varycolor', 'file'))
    addpath(genpath('~/MatlabAddon'));
end
The if condition checks for the existence of the function 'varycolor'. If it does not exist, the folder containing it (including all its subfolders, thanks to genpath) is added to the path.
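If this should happen automatically in every MATLAB session, the same check can go into a startup.m file on the MATLAB path, which MATLAB runs at startup (a small sketch reusing the folder above):
% startup.m -- executed automatically when MATLAB starts
if(~exist('varycolor', 'file'))
    addpath(genpath('~/MatlabAddon'));
end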
Tuesday, March 16, 2010
Thursday, March 4, 2010
Deleting empty rows in a matrix
>> a = [1,2,3;4,5,6;0,7,8;0,0,0;0,0,0]
a =
1 2 3
4 5 6
0 7 8
0 0 0
0 0 0
>> b = a(any(a,2),:)
b =
1 2 3
4 5 6
0 7 8
>> b = a(all(a,2),:)
b =
1 2 3
4 5 6
Here any(a,2) keeps every row that has at least one nonzero element, while all(a,2) keeps only those rows whose elements are all nonzero, which is why the row [0 7 8] survives the first command but not the second.
Wednesday, January 27, 2010
K-Medoid Algorithm in Matlab
According to Wikipedia, the k-medoids algorithm is a clustering algorithm related to the k-means algorithm. In contrast to k-means, k-medoids chooses actual data points as cluster centres. I wrote a MATLAB program implementing this algorithm and am posting it here in the hope that people may find it useful.
function [IDX, Cluster, Err] = kmedoid(data, NC, maxIter, varargin)
% Performs clustering using the Partition Around Medoids (PAM) algorithm.
% Cluster centres (medoids) are always chosen from the available data points.
%
% Usage:
% [IDX, C, cost] = kmedoid(data, NC, maxIter, [init_cluster], [init_idx])
%
% Input :
% data         - input data matrix (one row per data point)
% NC           - number of clusters
% maxIter      - maximum number of iterations
% init_cluster - initial cluster centres (optional)
% init_idx     - indices of the initial cluster centres (optional)
%
% Output :
% IDX     - indices of the data points which became cluster centres
% C       - cluster centres
% Err     - cost associated with the clusters for all iterations
% ------------------------------------------

% Size of data
dsize = size(data);
% Number of data points
L = dsize(1);
% Number of features
NF = dsize(2);
% Size of the cluster-centre matrix
csize = [NC, NF];

if (L < NC)
    error('Too few data points for making k clusters');
end

if (nargin > 5)
    error('Usage : [IDX,C,cost] = kmedoid(data, NC, maxIter, [init_cluster], [init_idx])');
elseif (nargin == 5)
    vsize1 = size(varargin{1});
    vsize2 = length(varargin{2});
    if (isequal(vsize1, csize))
        Cluster = varargin{1};
    else
        error('Incorrect size for initial cluster');
    end
    if (vsize2 == NC)
        IDX = varargin{2};
    else
        error('Incorrect size for initial cluster index');
    end
elseif (nargin == 4)
    error('You must provide both optional arguments: init_cluster and init_idx');
elseif (nargin == 3)
    % No initial cluster provided: pick NC distinct data points at random
    % (using randperm avoids duplicate initial medoids)
    p = randperm(L);
    IDX = p(1:NC)';
    Cluster = data(IDX,:);
else
    disp('Usage : [IDX,C,cost] = kmedoid(data, NC, maxIter, [init_cluster], [init_idx])');
    error('Function takes at least 3 arguments');
end

% Array for storing the cost of each iteration
Err = zeros(maxIter, 1);

% Z(:,:,i) holds the i-th data point replicated NC times,
% so distances to all centres can be computed in one shot
Z = zeros(NC, NF, L);
for i = 1:L
    Z(:,:,i) = repmat(data(i,:), NC, 1);
end

FIRST = 1;
total_cost = 0;

for Iteration = 1:maxIter
    if (FIRST)
        % First time: compute the cost associated with the initial clusters
        C = repmat(Cluster, [1, 1, L]);
        B = sqrt(sum((abs(Z - C).^2), 2));   % Euclidean distance
        B1 = squeeze(B);
        [Bmin, Bidx] = min(B1, [], 1);
        cost = zeros(1, NC);
        for k = 1:NC
            cost(k) = sum(Bmin(Bidx == k));
        end
        total_cost = sum(cost);
        FIRST = 0;   % reset the FIRST flag
    else
        % Change one cluster centre randomly and see if the cost decreases.
        % If yes, accept the change; otherwise reject it.
        while (1)
            Tidx = randi(L);            % random index between 1 and L
            if (~any(IDX == Tidx))      % avoid previously selected centres
                break;
            end
        end
        % Randomly change one cluster centre
        pos = randi(NC);
        OLD_IDX = IDX;                  % preserve the old index list
        IDX(pos) = Tidx;
        % Assign cluster centres
        PCluster = Cluster;
        Cluster = data(IDX,:);
        % For each data point, find the closest cluster centre and
        % compute the cost obtained from this assignment
        C = repmat(Cluster, [1, 1, L]);
        %B = sum(abs(Z - C), 2);              % Manhattan distance
        B = sqrt(sum((abs(Z - C).^2), 2));    % Euclidean distance
        % Rows    - clusters (1 .. NC)
        % Columns - data points (1 .. L)
        B1 = squeeze(B);
        % For each data point, find the nearest cluster; size(Bmin) = 1xL
        [Bmin, Bidx] = min(B1, [], 1);
        cost = zeros(1, NC);
        for k = 1:NC
            cost(k) = sum(Bmin(Bidx == k));
        end
        previous_cost = total_cost;
        total_cost = sum(cost);
        if (total_cost > previous_cost)
            % Cost increased: bad choice, restore the previous centres
            IDX = OLD_IDX;
            total_cost = previous_cost;
            Cluster = PCluster;
        end
    end
    Err(Iteration) = total_cost;
end

if (nargout == 0)
    disp(IDX);
elseif (nargout > 3)
    error('Function gives only 3 outputs');
end
return
Save this file as "kmedoid.m". Now we can create clusters for a given data set as follows:
[Idx, C, err] = kmedoid(data, 5, 1000);
The above command runs 1000 iterations to create 5 clusters for the given data.
Idx - the indices of the data points selected as cluster centres.
C - the cluster centres, selected from the data set.
err - the cost for each iteration. It can be plotted to see if the error is decreasing.
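For example, to check that the cost is non-increasing, err can be plotted directly. A minimal sketch with made-up toy data (the data and the number of clusters are only for illustration):
data = [randn(50,2); randn(50,2) + 4; randn(50,2) - 4];   % toy data with three separated groups
[Idx, C, err] = kmedoid(data, 3, 1000);
figure;
plot(err);                      % cost after each iteration; should never increase
xlabel('Iteration');
ylabel('Clustering cost');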
Tuesday, January 26, 2010
Cycle through Markers in Matlab Plot
figure(2);
markers = 'ox+*sdv^<>ph';            % marker symbols to cycle through
colorset = varycolor(NC);            % one colour per cluster (varycolor is from the MATLAB File Exchange)
set(gca, 'ColorOrder', colorset);    % make "hold all" cycle through these colours
hold all;
for i = 1 : NC
    % each cluster centre gets its own marker (and its own colour via hold all)
    plot(C(i,1), C(i,2), 'LineStyle', 'none', 'Marker', markers(i), 'MarkerSize', 10);
end
Monday, January 18, 2010
Avoid loops in Matlab
The following examples demonstrate how one can avoid for loops in MATLAB to speed up execution:
- Vector assignment
x = rand(1,100);
% Instead of:
for k = 1:100
    y(k) = sin(x(k));
end
% we can use:
y = sin(x);
- Subtracting a vector from each row/column of a matrix
a = [1 2 3];
b = magic(3);
c = repmat(a,3,1) - b;
% (on newer MATLAB releases, implicit expansion lets you write c = a - b; directly)
- Use logical indexing to operate on a part of a matrix; this is usually simpler than going through the "find" command.
a = 200:-10:50;
% replace the value 120 by 0
a(a == 120) = 0;
a =
200 190 180 170 160 150 140 130 0 110 100 90 80 70 60 50
Another example: the loop-based code
% [idx1, idx2] = find(angle < 0);
% idx = [idx1 idx2];
% for j = 1:size(idx,1)
%     angle(idx(j,1),idx(j,2)) = 360 + angle(idx(j,1),idx(j,2));
% end
can be replaced by the following one-liner:
angle(angle < 0) = angle(angle < 0) + 360;
- Useful commands for matrices and vectors that often replace explicit loops (see the small sketch below):
sum, norm, diff, abs, max, min, sort
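For instance, a running total that would otherwise need a loop reduces to a single call to sum (a small illustrative sketch):
x = rand(1,1000);
% Instead of:
% total = 0;
% for k = 1:1000
%     total = total + x(k);
% end
% we can simply write:
total = sum(x);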
A useful link in this direction:
http://www.ee.columbia.edu/~marios/matlab/matlab_tricks.html
Thursday, October 22, 2009
Index of an array element closest to an arbitrary value
>> x = 1:100:1000;
The index of the array element closest to, say, 130 is given by
>> [v,idx] = min(abs(x-130));
The answer is
v = 29, idx = 2
i.e. the second element, x(2) = 101, is the one closest to 130.
Friday, October 2, 2009
Effect of Controllability and Observability On Stability
In this post, I am sharing the email conversation I had with my friends on the topic "the effect of controllability and observability on stability". I am thankful to the student Asmini, who actually raised this question.
My response:
In simple terms, controllability is the property of a system by which one can control the state of the system. In technical terms, controllability is the ability to control each state variable, or to take the state from one point in the state space to another. There is a subtle difference between the terms "controllability" and "reachability"; don't worry about the second term at present, as the two are more or less the same. If a system is controllable, it means we can design a controller to control all the states. That's good for us, isn't it?
On the other hand, observability is the property of a system which enables us to observe all its states. I mean that if a system is observable, we can use sensors to record all the state variables. Note that this is not always the case; we may not have access to all the states of a system. Again, it is good to have a completely observable system.
Now, coming to your question: what is the effect of controllability and observability on stability?
Well, I think that if a system is completely controllable or observable, then it is possible to design controllers which can stabilise the system. But if the system is either uncontrollable or unobservable, then it is difficult to stabilise it, and in some cases it is impossible to design a stabilising controller at all.
Indrani's Response:
I assume that we know the meaning of "stability of a system". If a system's dynamics are not stable, we can make them stable by applying a proper control input. On the other hand, controllability of a system implies that it is possible to force the system to a particular state by applying a control input. If a state is uncontrollable, then no input will be able to control that state. Thus, if a system state is not controllable and its dynamics are not stable, then it will not be possible to make that state stable by applying a control input.
Similarly, if a state is not observable, then the controller will not be able to determine its behaviour from the system output, and hence will not be able to use that state to stabilize the system. Thus controllability and observability are two important properties of a system which are to be considered before designing a controller.
Prem's response:
Yes... Indrani's answer is correct... There is no correlation between stability and controllability. A controllable system can be either unstable or stable.
But controllability means that if the system is unstable, then you can stabilize it using proper state feedback. Observability indicates whether we can deduce the states from the observed output, so that the states are available for stabilization.
If some of the modes are not observable but those modes are already stable, then we can still stabilize the system.
So we can classify this into two cases:
i) An observable and controllable system can be stabilized with state feedback.
ii) If some of the states are unobservable but those modes are stable, then we can again design a stabilizing state-feedback controller using the observable states.
I am still waiting for Awhan's response, and I will update this post once I hear from him. But the following things can be concluded about this topic:
- Controllability and observability are desirable properties for controller design. They are the first things to check before designing a controller.
- There is no direct correlation between controllability or observability and the stability of the system. Stability is a property of the overall system (plant + controller), whereas controllability and observability are properties of the plant alone.
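As a side note for MATLAB users, both properties can be checked numerically with the Control System Toolbox; here is a minimal sketch on an arbitrary example system (the matrices and pole locations below are made up purely for illustration):
% Example state-space system (values chosen only for illustration)
A = [0 1; -2 -3];
B = [0; 1];
C = [1 0];
Co = ctrb(A, B);                           % controllability matrix [B, A*B]
Ob = obsv(A, C);                           % observability matrix  [C; C*A]
isControllable = (rank(Co) == size(A,1))   % full rank => controllable
isObservable   = (rank(Ob) == size(A,1))   % full rank => observable
% If (A,B) is controllable, a stabilizing state feedback u = -K*x can be
% designed, e.g. by pole placement:
K = place(A, B, [-1, -2]);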