Tag Archives: python

Reading .mat files with Python

MATLAB:

In MATLAB, first save your variables with the save function. To make them easy to read from Python, pass the version argument -v7 when saving.

Reference for the save function: [1]

Python:

On the Python side, read the file with the scipy library.

Reference: [2] loadmat

import scipy.io as sio
data = sio.loadmat('matlab.mat')
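To get a variable back out, index the returned dict by the name it had in MATLAB. A minimal round-trip sketch (the variable name A is my own example):

```python
import numpy as np
import scipy.io as sio

# Round trip: save a variable under the name 'A', then read it back.
sio.savemat('matlab.mat', {'A': np.eye(2)})
data = sio.loadmat('matlab.mat')
print(data['A'])  # the 2x2 identity matrix saved above
```

Note that loadmat also returns metadata keys such as __header__ and __version__ alongside your own variables.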

Yes, I have recently been working on artifact removal for dental CT. Naturally, I am using the popular deep learning approach~~

How to flip an image

Method 1: NumPy

import numpy as np
image_flipped_lr = np.fliplr(image)  # flip left-right (horizontal)
image_flipped_ud = np.flipud(image)  # flip up-down (vertical)

References: numpy.fliplr, numpy.flipud

Method 2: OpenCV

import cv2
image_flipped = cv2.flip(image, 1)  # flipCode > 0: horizontal, 0: vertical, < 0: both

cv2.flip(src, flipCode[, dst]) → dst

Reference: OpenCV

PS: setting the display size for plt.imshow:

import matplotlib.pyplot as plt
plt.figure(figsize=(15, 5))  # set the figure display size
plt.subplot(1, 2, 1)
plt.imshow(image)
plt.subplot(1, 2, 2)
plt.imshow(image_flipped)

Deep Learning: the Linear class explained

import numpy as np

class Linear(Node):
    """
    Represents a node that performs a linear transform.
    """
    def __init__(self, X, W, b):
        # The base class (Node) constructor. Weights and bias
        # are treated like inbound nodes.
        Node.__init__(self, [X, W, b])

    def forward(self):
        """
        Performs the math behind a linear transform.
        """
        X = self.inbound_nodes[0].value
        W = self.inbound_nodes[1].value
        b = self.inbound_nodes[2].value
        self.value = np.dot(X, W) + b

    def backward(self):
        """
        Calculates the gradient based on the output values.
        """
        # Initialize a partial for each of the inbound_nodes.
        self.gradients = {n: np.zeros_like(n.value) for n in self.inbound_nodes}
        # Cycle through the outputs. The gradient will change depending
        # on each output, so the gradients are summed over all outputs.
        for n in self.outbound_nodes:
            # Get the partial of the cost with respect to this node.
            grad_cost = n.gradients[self]
            # Set the partial of the loss with respect to this node's inputs.
            self.gradients[self.inbound_nodes[0]] += np.dot(grad_cost, self.inbound_nodes[1].value.T)
            # Set the partial of the loss with respect to this node's weights.
            self.gradients[self.inbound_nodes[1]] += np.dot(self.inbound_nodes[0].value.T, grad_cost)
            # Set the partial of the loss with respect to this node's bias.
            self.gradients[self.inbound_nodes[2]] += np.sum(grad_cost, axis=0, keepdims=False)

1. the loss with respect to inputs
self.gradients[self.inbound_nodes[0]] += np.dot(grad_cost, self.inbound_nodes[1].value.T)
The gradient of the cost with respect to a node is the product of the gradients along the path back from the cost (the chain rule).

Explanation 1:

np.dot(grad_cost, self.inbound_nodes[1].value.T)

A Linear node has three input parameters, inputs, weights, and bias, which correspond to

self.inbound_nodes[0], self.inbound_nodes[1], self.inbound_nodes[2]

So, each node will pass on the cost gradient to its inbound nodes and each node will get the cost gradient from its outbound nodes. Then, for each node we'll need to calculate a gradient that's the cost gradient times the gradient of that node with respect to its inputs.

So the derivative of Linear with respect to inputs is weights, giving grad_cost * weights, where grad_cost is the rate of change passed in from Linear's output nodes.

np.dot(self.inbound_nodes[0].value.T, grad_cost)

By the same reasoning, the derivative with respect to weights is inputs, so gradient = grad_cost * inputs.

np.sum(grad_cost, axis=0, keepdims=False)

As for bias, the derivative of Linear with respect to bias is identically 1, so gradient = 1 * grad_cost.

Explanation 2: why +=?

Because each node passes its error on to every one of its output nodes, computing a node's error during backpropagation means summing up the error contributions coming back from all of those output nodes, hence the +=.
This also explains the loop for n in self.outbound_nodes: it iterates over each node's output nodes.
If a node has multiple outgoing nodes, you just sum up the gradients from each node.
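These three formulas can be checked with a tiny standalone NumPy example (the shapes and values are my own toy choice, and grad_cost is taken to be all ones):

```python
import numpy as np

X = np.array([[1., 2.]])     # 1 sample, 2 features
W = np.array([[3.], [4.]])   # 2 features -> 1 output
b = np.array([5.])

y = np.dot(X, W) + b         # forward pass: [[16.]]

grad_cost = np.ones_like(y)  # pretend dCost/dy = 1

grad_X = np.dot(grad_cost, W.T)     # dCost/dX, shape (1, 2)
grad_W = np.dot(X.T, grad_cost)     # dCost/dW, shape (2, 1)
grad_b = np.sum(grad_cost, axis=0)  # dCost/db, shape (1,)

print(grad_X)  # [[3. 4.]]   -> the weights, as explained above
print(grad_W)  # [[1.] [2.]] -> the inputs
print(grad_b)  # [1.]
```

The printed gradients are exactly the weights, the inputs, and the summed grad_cost, matching the three lines of backward().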

Note 1:
Keep in mind that backpropagation and gradient descent are two separate steps: backpropagation finds the gradients, and with them the direction of change; gradient descent then uses them to minimize the error.

To find the gradient, you just multiply the gradients for all nodes in front of it going backwards from the cost. This is the idea behind backpropagation. The gradients are passed backwards through the network and used with gradient descent to update the weights and biases.

The bottom line:

Backpropagation computes only the derivatives; gradient descent is the overall optimization process.


Fixing defaults::qt-5.6.2-vc14_3

This error appeared while installing tensorflow with conda, and it wasn't the first time. Last time I simply reinstalled miniconda and the problem went away; this time I decided to actually fix it!

ERROR conda.core.link:_execute_actions(330): An error occurred while installing package 'defaults::qt-5.6.2-vc14_3'. UnicodeDecodeError('utf-8', b'\xd2\xd1\xb8\xb4\xd6\xc6 1 \xb8\xf6\xce\xc4\xbc\xfe\xa1\xa3\r\n', 0, 1, 'invalid continuation byte') Attempting to roll back. UnicodeDecodeError('utf-8', b'\xd2\xd1\xb8\xb4\xd6\xc6 1 \xb8\xf6\xce\xc4\xbc\xfe\xa1\xa3\r\n', 0, 1, 'invalid continuation byte')

ERROR conda.core.link:_execute_actions(319): An error occurred while installing package 'defaults::qt-5.6.2-vc9_3'

python | Fixing defaults::qt-5.6.2-vc14_3

Both articles describe the same fix, but when I tried it I ran into a new problem: ModuleNotFoundError: No module named 'chardet'

So I simply installed chardet with conda in the root environment:

conda install chardet

After that everything worked; I didn't even need the file edits suggested in the two articles above~

First taste of deep learning: fast-style-transfer

I signed up for Udacity's Deep Learning Nanodegree. The tuition is a lot for me, but to broaden my future job prospects I gritted my teeth and paid in installments. It's also the first time I've spent this much money on online learning; I hope I get something out of it, and even more, that it opens up more job opportunities down the road.

This post is just a first taste of deep learning. Nothing technical, merely a record of the process.

fast-style-transfer mimics the style of a famous painting and renders your own photo in that same style. GitHub

1.Git clone


Migrating Lofter images

The images can be downloaded now, great~ But one step is still missing: replacing the image links in each post with local links. Obviously not by hand; Python will do it.

The code went roughly like this.


My plan is to rewrite every image link into the form very9s/lofter/*.png. Fortunately there aren't many images. When the time comes I'll just put a lofter folder under the server's web root.
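A minimal sketch of that link rewrite (the lofter/ target folder comes from the plan above; the regex and the sample URL are my own assumptions and would need adjusting to the real Lofter hosts):

```python
import re

def localize_image_links(html, local_dir="lofter"):
    """Replace remote image URLs in a post body with local paths."""
    # Capture the bare filename at the end of any http(s) image URL.
    pattern = re.compile(r'https?://[^"\']*?/([\w-]+\.(?:png|jpe?g|gif))')
    return pattern.sub(lambda m: f"/{local_dir}/{m.group(1)}", html)

html = '<img src="http://imglf.nosdn.127.net/img/abc123.png" />'
print(localize_image_links(html))  # <img src="/lofter/abc123.png" />
```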
Look, the image links have been changed.

Well, the migration is just about ready. A few individual details still need adjusting, but the overall framework is done.

Downloading Lofter images with Python

The posts can now be migrated to Hexo, but the image links inside them still point to LOFTER, so the images have to be moved out as well.

1 - Downloading the images

I added image-downloading code on top of the existing program.

Very direct, very naive... just urlopen, then write. But after a single image the server flatly refused me.

2 - Spoofing

Showing up bare like that, of course I was rejected.

Digging through the official Python documentation, I found a sentence that pointed straight at the solution.

headers should be a dictionary, and will be treated as if add_header() was called with each key and value as arguments. This is often used to "spoof" the User-Agent header, which is used by a browser to identify itself – some HTTP servers only allow requests coming from common browsers as opposed to scripts. For example, Mozilla Firefox may identify itself as "Mozilla/5.0 (X11; U; Linux i686) Gecko/20071127 Firefox/2.0.0.11", while urllib's default user agent string is "Python-urllib/2.6" (on Python 2.6).

Source: urllib.request documentation

So I need to disguise myself as a browser, which means modifying the code:

This is done by adding a header.
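A minimal sketch of the download with a spoofed User-Agent (the URL, filename, and UA string below are hypothetical; the real links come from the parsed posts):

```python
from urllib.request import Request, urlopen

BROWSER_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"

def download(url, filename):
    """Fetch url while pretending to be a regular browser, save to filename."""
    req = Request(url, headers={"User-Agent": BROWSER_UA})
    with urlopen(req) as resp, open(filename, "wb") as f:
        f.write(resp.read())

# Example call (hypothetical image URL):
# download("http://imglf.nosdn.127.net/img/abc123.png", "abc123.png")
```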


3 - Success

Now the images finally download quickly, and the image-migration problem is largely solved.


Oh, and about parsing the posts: since the XML export stores HTML data, I used Beautiful Soup 4.2.0 to parse out the image links in each post.
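For illustration, extracting the image links with Beautiful Soup might look like this (the HTML snippet is a made-up stand-in for a real post body pulled from the XML):

```python
from bs4 import BeautifulSoup

# Hypothetical post body; the real HTML comes out of the exported XML.
html = '<p>text</p><img src="http://imglf.nosdn.127.net/img/a.png" />'
soup = BeautifulSoup(html, "html.parser")
image_links = [img["src"] for img in soup.find_all("img")]
print(image_links)  # ['http://imglf.nosdn.127.net/img/a.png']
```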

The next step is to modify the XML file and replace the image links in the posts with my own links.

Well~ the rest can wait until next time.

Why are strings immutable in Python?

Why are Python strings immutable?

There are several advantages.

  • One is performance: knowing that a string is immutable means we can allocate space for it at creation time, and the storage requirements are fixed and unchanging. This is also one of the reasons for the distinction between tuples and lists.

  • Another advantage is that strings in Python are considered as “elemental” as numbers. No amount of activity will change the value 8 to anything else, and in Python, no amount of activity will change the string “eight” to anything else.

In plain terms:

  1. A fixed memory footprint is more efficient.

  2. Just as nobody changes how the number 8 is written, the string "eight" never needs to change either.
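A quick demonstration of the point: in-place modification raises an error, and any "change" actually produces a brand-new string object:

```python
s = "eight"
try:
    s[0] = "E"    # str does not support item assignment
except TypeError as err:
    print(err)    # 'str' object does not support item assignment

t = s.upper()     # "modifying" returns a new string instead
print(s, t)       # eight EIGHT
print(s is t)     # False: two distinct objects
```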